1. 16 Aug, 2019 9 commits
    • f2fs: fix livelock in swapfile writes · 75a037f3
      Jaegeuk Kim authored
      This patch fixes a livelock in the below call path when writing swap pages.
      
      [46374.617256] c2    701  __switch_to+0xe4/0x100
      [46374.617265] c2    701  __schedule+0x80c/0xbc4
      [46374.617273] c2    701  schedule+0x74/0x98
      [46374.617281] c2    701  rwsem_down_read_failed+0x190/0x234
      [46374.617291] c2    701  down_read+0x58/0x5c
      [46374.617300] c2    701  f2fs_map_blocks+0x138/0x9a8
      [46374.617310] c2    701  get_data_block_dio_write+0x74/0x104
      [46374.617320] c2    701  __blockdev_direct_IO+0x1350/0x3930
      [46374.617331] c2    701  f2fs_direct_IO+0x55c/0x8bc
      [46374.617341] c2    701  __swap_writepage+0x1d0/0x3e8
      [46374.617351] c2    701  swap_writepage+0x44/0x54
      [46374.617360] c2    701  shrink_page_list+0x140/0xe80
      [46374.617371] c2    701  shrink_inactive_list+0x510/0x918
      [46374.617381] c2    701  shrink_node_memcg+0x2d4/0x804
      [46374.617391] c2    701  shrink_node+0x10c/0x2f8
      [46374.617400] c2    701  do_try_to_free_pages+0x178/0x38c
      [46374.617410] c2    701  try_to_free_pages+0x348/0x4b8
      [46374.617419] c2    701  __alloc_pages_nodemask+0x7f8/0x1014
      [46374.617429] c2    701  pagecache_get_page+0x184/0x2cc
      [46374.617438] c2    701  f2fs_new_node_page+0x60/0x41c
      [46374.617449] c2    701  f2fs_new_inode_page+0x50/0x7c
      [46374.617460] c2    701  f2fs_init_inode_metadata+0x128/0x530
      [46374.617472] c2    701  f2fs_add_inline_entry+0x138/0xd64
      [46374.617480] c2    701  f2fs_do_add_link+0xf4/0x178
      [46374.617488] c2    701  f2fs_create+0x1e4/0x3ac
      [46374.617497] c2    701  path_openat+0xdc0/0x1308
      [46374.617507] c2    701  do_filp_open+0x78/0x124
      [46374.617516] c2    701  do_sys_open+0x134/0x248
      [46374.617525] c2    701  SyS_openat+0x14/0x20
      Reviewed-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
    • Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux · b7e7c85d
      Linus Torvalds authored
      Pull arm64 fixes from Catalin Marinas:
      
       - Don't taint the kernel if CPUs have different sets of page sizes
         supported (other than the one in use).
      
       - Issue I-cache maintenance for module ftrace trampoline.
      
      * tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
        arm64: ftrace: Ensure module ftrace trampoline is coherent with I-side
        arm64: cpufeature: Don't treat granule sizes as strict
    • arm64: ftrace: Ensure module ftrace trampoline is coherent with I-side · b6143d10
      Will Deacon authored
      The initial support for dynamic ftrace trampolines in modules made use
      of an indirect branch which loaded its target from the beginning of
      a special section (e71a4e1b ("arm64: ftrace: add support for far
      branches to dynamic ftrace")). Since no instructions were being patched,
      no cache maintenance was needed. However, later in be0f272b ("arm64:
      ftrace: emit ftrace-mod.o contents through code") this code was reworked
      to output the trampoline instructions directly into the PLT entry but,
      unfortunately, the necessary cache maintenance was overlooked.
      
      Add a call to __flush_icache_range() after writing the new trampoline
      instructions but before patching in the branch to the trampoline.
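
      A rough sketch of that ordering (only __flush_icache_range() is named
      by the patch; the helper shape and names around it are illustrative
      assumptions, not the literal diff):

      ```
      /* Hedged sketch: emit the trampoline, make the I-side coherent,
       * and only then patch in the branch that jumps to it.
       */
      static void install_mod_trampoline(u32 *dst, const u32 *insns, size_t count)
      {
      	memcpy(dst, insns, count * sizeof(u32));   /* write into the PLT entry */
      	__flush_icache_range((unsigned long)dst,
      			     (unsigned long)(dst + count));  /* I-cache maintenance */
      	/* safe now: patch the call site to branch to dst */
      }
      ```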
      
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: <stable@vger.kernel.org>
      Fixes: be0f272b ("arm64: ftrace: emit ftrace-mod.o contents through code")
      Signed-off-by: Will Deacon <will@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • Merge tag 'pm-5.3-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm · 2d63ba3e
      Linus Torvalds authored
      Pull power management fixes from Rafael Wysocki:
       "These add a check to avoid recent suspend-to-idle power regression on
        systems with NVMe drives where the PCIe ASPM policy is "performance"
        (or when the kernel is built without ASPM support), fix an issue
        related to frequency limits in the schedutil cpufreq governor and fix
        a mistake related to the PM QoS usage in the cpufreq core introduced
        recently.
      
        Specifics:
      
         - Disable NVMe power optimization related to suspend-to-idle added
           recently on systems where PCIe ASPM is not able to put PCIe links
           into low-power states to prevent excess power from being drawn by
           the system while suspended (Rafael Wysocki).
      
         - Make the schedutil governor handle frequency limits changes
           properly in all cases (Viresh Kumar).
      
         - Prevent the cpufreq core from treating positive values returned by
           dev_pm_qos_update_request() as errors (Viresh Kumar)"
      
      * tag 'pm-5.3-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
        nvme-pci: Allow PCI bus-level PM to be used if ASPM is disabled
        PCI/ASPM: Add pcie_aspm_enabled()
        cpufreq: schedutil: Don't skip freq update when limits change
        cpufreq: dev_pm_qos_update_request() can return 1 on success
    • Merge tag 'dmaengine-fix-5.3-rc5' of git://git.infradead.org/users/vkoul/slave-dma · 9da5bb24
      Linus Torvalds authored
      Pull dmaengine fixes from Vinod Koul:
       "Fixes in dmaengine drivers for:
      
         - dw-edma: endianess, _iomem type and stack usages
      
         - ste_dma40: unneeded variable and null-pointer dereference
      
         - tegra210-adma: unused function
      
         - omap-dma: off-by-one fix"
      
      * tag 'dmaengine-fix-5.3-rc5' of git://git.infradead.org/users/vkoul/slave-dma:
        omap-dma/omap_vout_vrfb: fix off-by-one fi value
        dmaengine: stm32-mdma: Fix a possible null-pointer dereference in stm32_mdma_irq_handler()
        dmaengine: tegra210-adma: Fix unused function warnings
        dmaengine: ste_dma40: fix unneeded variable warning
        dmaengine: dw-edma: fix endianess confusion
        dmaengine: dw-edma: fix __iomem type confusion
        dmaengine: dw-edma: fix unnecessary stack usage
    • Merge tag 'sound-5.3-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound · cfa0bb2a
      Linus Torvalds authored
      Pull sound fixes from Takashi Iwai:
       "All small fixes targeted for stable:
      
         - Two fixes for USB-audio with malformed descriptor, spotted by
           fuzzers
      
         - Two fixes for the Conexant HD-audio codec wrt power management
      
         - Quirks for HD-audio AMD platform and HP laptop
      
         - HD-audio memory leak fix"
      
      * tag 'sound-5.3-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound:
        ALSA: usb-audio: Fix a stack buffer overflow bug in check_input_term
        ALSA: usb-audio: Fix an OOB bug in parse_audio_mixer_unit
        ALSA: hda - Add a generic reboot_notify
        ALSA: hda - Let all conexant codec enter D3 when rebooting
        ALSA: hda/realtek - Add quirk for HP Envy x360
        ALSA: hda - Fix a memory leak bug
        ALSA: hda - Apply workaround for another AMD chip 1022:1487
    • Merge tag 'drm-fixes-2019-08-16' of git://anongit.freedesktop.org/drm/drm · ec037ac2
      Linus Torvalds authored
      Pull drm fixes from Dave Airlie:
       "Nothing too crazy this week, one amdgpu fix to use vmalloc for a
        struct that grew in size, and another MST fix for nouveau, and some
        other misc fixes:
      
        i915:
         - single GVT use after free fix
      
        scheduler:
         - entity destruction race fix
      
        amdgpu:
         - struct allocation fix
         - gfx9 soft recovery fix
      
        nouveau:
         - followup MST fix
      
        ast:
         - vga register race fix"
      
      * tag 'drm-fixes-2019-08-16' of git://anongit.freedesktop.org/drm/drm:
        drm/nouveau: Only recalculate PBN/VCPI on mode/connector changes
        drm/ast: Fixed reboot test may cause system hanged
        drm/scheduler: use job count instead of peek
        drm/amd/display: use kvmalloc for dc_state (v2)
        drm/amdgpu: fix gfx9 soft recovery
        drm/i915: Use after free in error path in intel_vgpu_create_workload()
    • Merge branch 'pm-cpufreq' · a3ee2477
      Rafael J. Wysocki authored
      * pm-cpufreq:
        cpufreq: schedutil: Don't skip freq update when limits change
        cpufreq: dev_pm_qos_update_request() can return 1 on success
    • Merge tag 'drm-intel-fixes-2019-08-15' of... · a85abd5d
      Dave Airlie authored
      Merge tag 'drm-intel-fixes-2019-08-15' of git://anongit.freedesktop.org/drm/drm-intel into drm-fixes
      
      drm/i915 fixes for v5.3-rc5:
      - GVT use-after-free fix
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      From: Jani Nikula <jani.nikula@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/87zhkag9ic.fsf@intel.com
  2. 15 Aug, 2019 10 commits
  3. 14 Aug, 2019 10 commits
    • Merge tag 'Wimplicit-fallthrough-5.3-rc5' of... · 41de5963
      Linus Torvalds authored
      Merge tag 'Wimplicit-fallthrough-5.3-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux
      
      Pull fallthrough fixes from Gustavo A. R. Silva:
       "Fix sh mainline builds:
      
         - Fix fall-through warning in sh.
      
         - Fix missing break bug in sh (this is a 10-year-old bug)
      
        Currently, mainline builds for sh are broken. These patches fix that"
      
      * tag 'Wimplicit-fallthrough-5.3-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux:
        sh: kernel: hw_breakpoint: Fix missing break in switch statement
        sh: kernel: disassemble: Mark expected switch fall-throughs
    • Merge tag 'afs-fixes-20190814' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs · e22a97a2
      Linus Torvalds authored
      Pull afs fixes from David Howells:
      
       - Fix the CB.ProbeUuid handler to generate its reply correctly.
      
       - Fix a mix up in indices when parsing a Volume Location entry record.
      
       - Fix a potential NULL-pointer deref when cleaning up a read request.
      
       - Fix the expected data version of the destination directory in
         afs_rename().
      
       - Fix afs_d_revalidate() to only update d_fsdata if it's not the same
         as the directory data version to reduce the likelihood of overwriting
         the result of a competing operation. (d_fsdata carries the directory
         DV or the least-significant word thereof).
      
       - Fix the tracking of the data-version on a directory and make sure
         that dentry objects get properly initialised, updated and
         revalidated.
      
         Also fix rename to update d_fsdata to match the new directory's DV if
         the dentry gets moved over and unhash the dentry to stop
         afs_d_revalidate() from interfering.
      
      * tag 'afs-fixes-20190814' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs:
        afs: Fix missing dentry data version updating
        afs: Only update d_fsdata if different in afs_d_revalidate()
        afs: Fix off-by-one in afs_rename() expected data version calculation
        fs: afs: Fix a possible null-pointer dereference in afs_put_read()
        afs: Fix loop index mixup in afs_deliver_vl_get_entry_by_name_u()
        afs: Fix the CB.ProbeUuid service handler to reply correctly
    • drm/scheduler: use job count instead of peek · e1b4ce25
      Christian König authored
      The spsc_queue_peek() function accesses queue->head, which belongs to
      the consumer thread and shouldn't be accessed by the producer.
      
      This is fixing a rare race condition when destroying entities.
      Signed-off-by: Christian König <christian.koenig@amd.com>
      Acked-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
      Reviewed-by: Monk.liu@amd.com
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
    • Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma · a8dba053
      Linus Torvalds authored
      Pull rdma fixes from Doug Ledford:
       "Fairly small pull request for -rc3. I'm out of town the rest of this
        week, so I made sure to clean out as much as possible from patchworks
        in enough time for 0-day to chew through it (Yay! for 0-day being back
        online! :-)). Jason might send through any emergency stuff that could
        pop up, otherwise I'm back next week.
      
        The only real thing of note is the siw ABI change. Since we just
        merged siw *this* release, there are no prior kernel releases to
        maintain kernel ABI with. I told Bernard that if there is anything
        else about the siw ABI he thinks he might want to change before it
        goes set in stone, he should get it in ASAP. The siw module was around
        for several years outside the kernel tree, and it had to be revamped
        considerably for inclusion upstream, so we are making no attempts to
        be backward compatible with the out of tree version. Once 5.3 is
        actually released, we will have our baseline ABI to maintain.
      
        Summary:
      
         - Fix a memory registration release flow issue that was causing a
           WARN_ON (mlx5)
      
         - If the counters for a port aren't allocated, then we can't do
           operations on the non-existent counters (core)
      
         - Check the right variable for error code result (mlx5)
      
         - Fix a use after free issue (mlx5)
      
         - Fix an off by one memory leak (siw)
      
         - Actually return an error code on error (core)
      
         - Allow siw to be built on 32bit arches (siw, ABI change, but OK
           since siw was just merged this merge window and there is no prior
           released kernel to maintain compatibility with and we also updated
           the rdma-core user space package to match)"
      
      * tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma:
        RDMA/siw: Change CQ flags from 64->32 bits
        RDMA/core: Fix error code in stat_get_doit_qp()
        RDMA/siw: Fix a memory leak in siw_init_cpulist()
        IB/mlx5: Fix use-after-free error while accessing ev_file pointer
        IB/mlx5: Check the correct variable in error handling code
        RDMA/counter: Prevent QP counter binding if counters unsupported
        IB/mlx5: Fix implicit MR release flow
    • ALSA: usb-audio: Fix an OOB bug in parse_audio_mixer_unit · daac0715
      Hui Peng authored
      The `uac_mixer_unit_descriptor` shown below is read from the
      device side. In `parse_audio_mixer_unit`, the `baSourceID` field is
      accessed from index 0 to `bNrInPins` - 1, and the current implementation
      assumes that the descriptor is always valid (i.e. that the length of
      the descriptor is no shorter than 5 + `bNrInPins`). If a descriptor
      read from the device side is invalid, it may trigger an out-of-bounds
      memory access.
      
      ```
      struct uac_mixer_unit_descriptor {
      	__u8 bLength;
      	__u8 bDescriptorType;
      	__u8 bDescriptorSubtype;
      	__u8 bUnitID;
      	__u8 bNrInPins;
      	__u8 baSourceID[];
      }
      ```
      
      This patch fixes the bug by adding a sanity check on the length of
      the descriptor.
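
      A minimal sketch of such a check (the helper name and error path are
      assumptions, not the literal patch):

      ```
      /* Hedged sketch: reject descriptors too short to hold baSourceID[].
       * sizeof(*desc) covers the 5 fixed bytes before the flexible array.
       */
      static int mixer_unit_desc_sane(const struct uac_mixer_unit_descriptor *desc)
      {
      	if (desc->bLength < sizeof(*desc) + desc->bNrInPins)
      		return -EINVAL;	/* truncated descriptor from the device */
      	return 0;
      }
      ```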
      Reported-by: Hui Peng <benquike@gmail.com>
      Reported-by: Mathias Payer <mathias.payer@nebelwelt.net>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Hui Peng <benquike@gmail.com>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
    • Merge tag 'dma-mapping-5.3-4' of git://git.infradead.org/users/hch/dma-mapping · e83b009c
      Linus Torvalds authored
      Pull dma-mapping fixes from Christoph Hellwig:
      
       - fix the handling of the bus_dma_mask in dma_get_required_mask, which
         caused a regression in this merge window (Lucas Stach)
      
       - fix a regression in the handling of DMA_ATTR_NO_KERNEL_MAPPING (me)
      
       - fix dma_mmap_coherent to not cause page attribute mismatches on
         coherent architectures like x86 (me)
      
      * tag 'dma-mapping-5.3-4' of git://git.infradead.org/users/hch/dma-mapping:
        dma-mapping: fix page attributes for dma_mmap_*
        dma-direct: don't truncate dma_required_mask to bus addressing capabilities
        dma-direct: fix DMA_ATTR_NO_KERNEL_MAPPING
    • Merge tag 'iommu-fixes-v5.3-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu · b5e33e44
      Linus Torvalds authored
      Pull iommu fixes from Joerg Roedel:
      
       - A couple more fixes for the Intel VT-d driver for bugs introduced
         during the recent conversion of this driver to use IOMMU core default
         domains.
      
       - Fix for common dma-iommu code to make sure MSI mappings happen in the
         correct domain for a device.
      
       - Fix a corner case in the handling of sg-lists in dma-iommu code that
         might cause dma_length to be truncated.
      
       - Mark a switch as fall-through in arm-smmu code.
      
      * tag 'iommu-fixes-v5.3-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
        iommu/vt-d: Fix possible use-after-free of private domain
        iommu/vt-d: Detach domain before using a private one
        iommu/dma: Handle SG length overflow better
        iommu/vt-d: Correctly check format of page table in debugfs
        iommu/vt-d: Detach domain when move device out of group
        iommu/arm-smmu: Mark expected switch fall-through
        iommu/dma: Handle MSI mappings separately
    • Merge branch 'akpm' (patches from Andrew) · cab6d5b6
      Linus Torvalds authored
      Merge misc VM fixes from Andrew Morton:
       "A bunch of hotfixes, all affecting mm/.
      
        The two-patch series from Andrea may be controversial. This restores
        patches which were reverted in Dec 2018 due to a regression report [*].
      
        After extensive discussion it is evident that the problems which these
        patches solved were significantly more serious than the problems they
        introduced. I am told that major distros are already carrying these
        two patches for this reason"
      
      [*] See
      
            https://lore.kernel.org/lkml/alpine.DEB.2.21.1812061343240.144733@chino.kir.corp.google.com/
            https://lore.kernel.org/lkml/alpine.DEB.2.21.1812031545560.161134@chino.kir.corp.google.com/
      
       for the google-specific issues brought up by David Rientjes. And as
        Andrew says:
      
          "I'm unaware of anyone else who will be adversely affected by this,
           and google already carries over a thousand kernel patches - another
           won't kill them.
      
           There has been sporadic discussion about fixing these things for
           real but it's clear that nobody apart from David is particularly
           motivated"
      
      * emailed patches from Andrew Morton <akpm@linux-foundation.org>:
        hugetlbfs: fix hugetlb page migration/fault race causing SIGBUS
        mm, vmscan: do not special-case slab reclaim when watermarks are boosted
        Revert "mm, thp: restore node-local hugepage allocations"
        Revert "Revert "mm, thp: consolidate THP gfp handling into alloc_hugepage_direct_gfpmask""
        include/asm-generic/5level-fixup.h: fix variable 'p4d' set but not used
        seq_file: fix problem when seeking mid-record
        mm: workingset: fix vmstat counters for shadow nodes
        mm/usercopy: use memory range to be accessed for wraparound check
        mm: kmemleak: disable early logging in case of error
        mm/vmalloc.c: fix percpu free VM area search criteria
        mm/memcontrol.c: fix use after free in mem_cgroup_iter()
        mm/z3fold.c: fix z3fold_destroy_pool() race condition
        mm/z3fold.c: fix z3fold_destroy_pool() ordering
        mm: mempolicy: handle vma with unmovable pages mapped correctly in mbind
        mm: mempolicy: make the behavior consistent when MPOL_MF_MOVE* and MPOL_MF_STRICT were specified
        mm/hmm: fix bad subpage pointer in try_to_unmap_one
        mm/hmm: fix ZONE_DEVICE anon page mapping reuse
        mm: document zone device struct page field usage
    • ALSA: hda - Add a generic reboot_notify · 871b9066
      Hui Wang authored
      Making the codec enter D3 before rebooting or powering off can fix the
      noise issue on some laptops. And in theory it is harmless for all
      codecs to enter D3 before rebooting or powering off, so let us add a
      generic reboot_notify; the realtek and conexant drivers can then call
      this function.
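
      Roughly, such a generic helper could look like the sketch below; the
      function name and exact sequence are assumptions based on common
      HD-audio APIs, not a quote of the patch:

      ```
      /* Hedged sketch: force the codec into D3 so the speaker stays silent
       * across reboot/poweroff.
       */
      void snd_hda_gen_reboot_notify(struct hda_codec *codec)
      {
      	snd_hda_codec_set_power_to_all(codec, codec->core.afg, AC_PWRST_D3);
      	snd_hda_codec_write(codec, codec->core.afg, 0,
      			    AC_VERB_SET_POWER_STATE, AC_PWRST_D3);
      	msleep(10);	/* give the codec time to settle in D3 */
      }
      ```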
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Hui Wang <hui.wang@canonical.com>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
    • ALSA: hda - Let all conexant codec enter D3 when rebooting · 401714d9
      Hui Wang authored
      We have 3 new Lenovo laptops which have the Conexant codec 0x14f11f86;
      these 3 laptops also have the noise issue when rebooting. After
      letting the codec enter D3 before rebooting or poweroff, the noise
      disappears.

      Instead of adding a new ID again in reboot_notify(), let us make this
      function apply to all Conexant codecs. In theory, making the codec
      enter D3 before rebooting or poweroff is harmless, and I tested this
      change on a couple of other Lenovo laptops which have different
      Conexant codecs; there is no side effect so far.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Hui Wang <hui.wang@canonical.com>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
  4. 13 Aug, 2019 11 commits
    • hugetlbfs: fix hugetlb page migration/fault race causing SIGBUS · 4643d67e
      Mike Kravetz authored
      Li Wang discovered that LTP/move_page12 V2 sometimes triggers SIGBUS in
      the kernel-v5.2.3 testing.  This is caused by a race between hugetlb
      page migration and page fault.
      
      If a hugetlb page can not be allocated to satisfy a page fault, the task
      is sent SIGBUS.  This is normal hugetlbfs behavior.  A hugetlb fault
      mutex exists to prevent two tasks from trying to instantiate the same
      page.  This protects against the situation where there is only one
      hugetlb page, and both tasks would try to allocate.  Without the mutex,
      one would fail and SIGBUS even though the other fault would be
      successful.
      
      There is a similar race between hugetlb page migration and fault.
      Migration code will allocate a page for the target of the migration.  It
      will then unmap the original page from all page tables.  It does this
      unmap by first clearing the pte and then writing a migration entry.  The
      page table lock is held for the duration of this clear and write
      operation.  However, the beginning of the hugetlb page fault code
      optimistically checks the pte without taking the page table lock.  If
      it is clear (as it can be during the migration unmap operation), a hugetlb
      page allocation is attempted to satisfy the fault.  Note that the page
      which will eventually satisfy this fault was already allocated by the
      migration code.  However, the allocation within the fault path could
      fail which would result in the task incorrectly being sent SIGBUS.
      
      Ideally, we could take the hugetlb fault mutex in the migration code
      when modifying the page tables.  However, locks must be taken in the
      order of hugetlb fault mutex, page lock, page table lock.  This would
      require significant rework of the migration code.  Instead, the issue is
      addressed in the hugetlb fault code.  After failing to allocate a huge
      page, take the page table lock and check for huge_pte_none before
      returning an error.  This is the same check that must be made further in
      the code even if page allocation is successful.
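
      The shape of that recheck is sketched below (hedged; simplified from
      the fault path, not the literal diff):

      ```
      /* Hedged sketch: after alloc_huge_page() fails, recheck the pte under
       * the page table lock before signalling an error.
       */
      ptl = huge_pte_lock(h, mm, ptep);
      if (!huge_pte_none(huge_ptep_get(ptep))) {
      	ret = 0;		/* migration already provided the page; retry */
      	spin_unlock(ptl);
      	goto out;
      }
      spin_unlock(ptl);
      ret = vmf_error(PTR_ERR(page));	/* genuinely no page: SIGBUS */
      goto out;
      ```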
      
      Link: http://lkml.kernel.org/r/20190808000533.7701-1-mike.kravetz@oracle.com
      Fixes: 290408d4 ("hugetlb: hugepage migration core")
      Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
      Reported-by: Li Wang <liwang@redhat.com>
      Tested-by: Li Wang <liwang@redhat.com>
      Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Cyril Hrubis <chrubis@suse.cz>
      Cc: Xishi Qiu <xishi.qiuxishi@alibaba-inc.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, vmscan: do not special-case slab reclaim when watermarks are boosted · 28360f39
      Mel Gorman authored
      Dave Chinner reported a problem pointing a finger at commit 1c30844d
      ("mm: reclaim small amounts of memory when an external fragmentation
      event occurs").
      
      The report is extensive:
      
        https://lore.kernel.org/linux-mm/20190807091858.2857-1-david@fromorbit.com/
      
      and it's worth recording the most relevant parts (colorful language and
      typos included).
      
      	When running a simple, steady state 4kB file creation test to
      	simulate extracting tarballs larger than memory full of small
      	files into the filesystem, I noticed that once memory fills up
      	the cache balance goes to hell.
      
      	The workload is creating one dirty cached inode for every dirty
      	page, both of which should require a single IO each to clean and
      	reclaim, and creation of inodes is throttled by the rate at which
      	dirty writeback runs at (via balance dirty pages). Hence the ingest
      	rate of new cached inodes and page cache pages is identical and
      	steady. As a result, memory reclaim should quickly find a steady
      	balance between page cache and inode caches.
      
      	The moment memory fills, the page cache is reclaimed at a much
      	faster rate than the inode cache, and evidence suggests that
      	the inode cache shrinker is not being called when large batches
      	of pages are being reclaimed. In roughly the same time period
      	that it takes to fill memory with 50% pages and 50% slab caches,
      	memory reclaim reduces the page cache down to just dirty pages
      	and slab caches fill the entirety of memory.
      
      	The LRU is largely full of dirty pages, and we're getting spikes
      	of random writeback from memory reclaim so it's all going to shit.
      	Behaviour never recovers, the page cache remains pinned at just
      	dirty pages, and nothing I could tune would make any difference.
      	vfs_cache_pressure makes no difference - I would set it so high
      	it should trim the entire inode caches in a single pass, yet it
      	didn't do anything. It was clear from tracing and live telemetry
      	that the shrinkers were pretty much not running except when
      	there was absolutely no memory free at all, and then they did
      	the minimum necessary to free memory to make progress.
      
      	So I went looking at the code, trying to find places where pages
      	got reclaimed and the shrinkers weren't called. There's only one
      	- kswapd doing boosted reclaim as per commit 1c30844d ("mm:
      	reclaim small amounts of memory when an external fragmentation
      	event occurs").
      
      The watermark boosting introduced by the commit is triggered in response
      to an allocation "fragmentation event".  The boosting was not intended
      to target THP specifically and triggers even if THP is disabled.
      However, with Dave's perfectly reasonable workload, fragmentation events
      can be very common given the ratio of slab to page cache allocations so
      boosting remains active for long periods of time.
      
      As high-order allocations might use compaction and compaction cannot
      move slab pages the decision was made in the commit to special-case
      kswapd when watermarks are boosted -- kswapd avoids reclaiming slab as
      reclaiming slab does not directly help compaction.
      
      As Dave notes, this decision means that slab can be artificially
      protected for long periods of time and messes up the balance with slab
      and page caches.
      
      Removing the special casing can still indirectly help avoid
      fragmentation by avoiding fragmentation-causing events due to slab
      allocation as pages from a slab pageblock will have some slab objects
      freed.  Furthermore, with the special casing, reclaim behaviour is
      unpredictable as kswapd sometimes examines slab and sometimes does not
      in a manner that is tricky to tune or analyse.
      
      This patch removes the special casing.  The downside is that this is not
      a universal performance win.  Some benchmarks that depend on the
      residency of data when rereading metadata may see a regression when slab
      reclaim is restored to its original behaviour.  Similarly, some
      benchmarks that only read-once or write-once may perform better when
      page reclaim is too aggressive.  The primary upside is that slab
      shrinker is less surprising (arguably more sane but that's a matter of
      opinion), behaves consistently regardless of the fragmentation state of
      the system and properly obeys VM sysctls.
      
      A fsmark benchmark configuration was constructed similar to what Dave
      reported and is codified by the mmtest configuration
      config-io-fsmark-small-file-stream.  It was evaluated on a 1-socket
      machine to avoid dealing with NUMA-related issues and the timing of
      reclaim.  The storage was an SSD Samsung Evo and a fresh trimmed XFS
      filesystem was used for the test data.
      
      This is not an exact replication of Dave's setup.  The configuration
      scales its parameters depending on the memory size of the SUT to behave
      similarly across machines.  The parameters mean the first sample
      reported by fs_mark is using 50% of RAM which will barely be throttled
      and look like a big outlier.  Dave used fake NUMA to have multiple
      kswapd instances which I didn't replicate.  Finally, the number of
      iterations differ from Dave's test as the target disk was not large
      enough.  While not identical, it should be representative.
      
        fsmark
                                           5.3.0-rc3              5.3.0-rc3
                                             vanilla          shrinker-v1r1
        Min       1-files/sec     4444.80 (   0.00%)     4765.60 (   7.22%)
        1st-qrtle 1-files/sec     5005.10 (   0.00%)     5091.70 (   1.73%)
        2nd-qrtle 1-files/sec     4917.80 (   0.00%)     4855.60 (  -1.26%)
        3rd-qrtle 1-files/sec     4667.40 (   0.00%)     4831.20 (   3.51%)
        Max-1     1-files/sec    11421.50 (   0.00%)     9999.30 ( -12.45%)
        Max-5     1-files/sec    11421.50 (   0.00%)     9999.30 ( -12.45%)
        Max-10    1-files/sec    11421.50 (   0.00%)     9999.30 ( -12.45%)
        Max-90    1-files/sec     4649.60 (   0.00%)     4780.70 (   2.82%)
        Max-95    1-files/sec     4491.00 (   0.00%)     4768.20 (   6.17%)
        Max-99    1-files/sec     4491.00 (   0.00%)     4768.20 (   6.17%)
        Max       1-files/sec    11421.50 (   0.00%)     9999.30 ( -12.45%)
        Hmean     1-files/sec     5004.75 (   0.00%)     5075.96 (   1.42%)
        Stddev    1-files/sec     1778.70 (   0.00%)     1369.66 (  23.00%)
        CoeffVar  1-files/sec       33.70 (   0.00%)       26.05 (  22.71%)
        BHmean-99 1-files/sec     5053.72 (   0.00%)     5101.52 (   0.95%)
        BHmean-95 1-files/sec     5053.72 (   0.00%)     5101.52 (   0.95%)
        BHmean-90 1-files/sec     5107.05 (   0.00%)     5131.41 (   0.48%)
        BHmean-75 1-files/sec     5208.45 (   0.00%)     5206.68 (  -0.03%)
        BHmean-50 1-files/sec     5405.53 (   0.00%)     5381.62 (  -0.44%)
        BHmean-25 1-files/sec     6179.75 (   0.00%)     6095.14 (  -1.37%)
      
                           5.3.0-rc3   5.3.0-rc3
                             vanilla  shrinker-v1r1
        Duration User         501.82      497.29
        Duration System      4401.44     4424.08
        Duration Elapsed     8124.76     8358.05
      
      This is showing a slight skew for the max result, representing a large
      outlier, while the 1st, 2nd and 3rd quartiles are similar, indicating
      that the bulk of the results show little difference.  Note that an earlier
      version of the fsmark configuration showed a regression but that
      included more samples taken while memory was still filling.
      
      Note that the elapsed time is higher.  Part of this is that the
      configuration included time to delete all the test files when the test
      completes -- the test automation handles the possibility of testing
      fsmark with multiple thread counts.  Without the patch, many of these
      objects would be memory resident which is part of what the patch is
      addressing.
      
      There are other important observations that justify the patch.
      
      1. With the vanilla kernel, the number of dirty pages in the system is
         very low for much of the test. With this patch, dirty pages is
         generally kept at 10% which matches vm.dirty_background_ratio which
         is normal expected historical behaviour.
      
      2. With the vanilla kernel, the ratio of Slab/Pagecache is close to
         0.95 for much of the test i.e. Slab is being left alone and
         dominating memory consumption. With the patch applied, the ratio
         varies between 0.35 and 0.45 with the bulk of the measured ratios
         roughly half way between those values. This is a different balance to
         what Dave reported but it was at least consistent.
      
      3. Slabs are scanned throughout the entire test with the patch applied.
         The vanilla kernel has periods with no scan activity and then
         relatively massive spikes.
      
      4. Without the patch, kswapd scan rates are very variable. With the
         patch, the scan rates remain quite steady.
      
      5. Overall vmstats are closer to normal expectations
      
      	                                5.3.0-rc3      5.3.0-rc3
      	                                  vanilla  shrinker-v1r1
          Ops Direct pages scanned             99388.00      328410.00
          Ops Kswapd pages scanned          45382917.00    33451026.00
          Ops Kswapd pages reclaimed        30869570.00    25239655.00
          Ops Direct pages reclaimed           74131.00        5830.00
          Ops Kswapd efficiency %                 68.02          75.45
          Ops Kswapd velocity                   5585.75        4002.25
          Ops Page reclaim immediate         1179721.00      430927.00
          Ops Slabs scanned                 62367361.00    73581394.00
          Ops Direct inode steals               2103.00        1002.00
          Ops Kswapd inode steals             570180.00     5183206.00
      
      	o Vanilla kernel is hitting direct reclaim more frequently,
      	  not very much in absolute terms but the fact the patch
      	  reduces it is interesting
      	o "Page reclaim immediate" in the vanilla kernel indicates
      	  dirty pages are being encountered at the tail of the LRU.
      	  This is generally bad and means in this case that the LRU
      	  is not long enough for dirty pages to be cleaned by the
      	  background flush in time. This is much reduced by the
      	  patch.
      	o With the patch, kswapd is reclaiming 10 times more slab
      	  pages than with the vanilla kernel. This is indicative
      	  of the watermark boosting over-protecting slab
      
      A more complete set of tests were run that were part of the basis for
      introducing boosting and while there are some differences, they are well
      within tolerances.
      
      Bottom line: special casing kswapd to avoid reclaiming slab makes its
      behaviour unpredictable and can lead to abnormal results for normal
      workloads.
      
      This patch restores the expected behaviour that slab and page cache is
      balanced consistently for a workload with a steady allocation ratio of
      slab/pagecache pages.  It also means that workloads which favour the
      preservation of slab over pagecache can tune this via
      vm.vfs_cache_pressure, whereas the vanilla kernel effectively ignores
      the parameter when boosting is active.
      
      Link: http://lkml.kernel.org/r/20190808182946.GM2739@techsingularity.net
      Fixes: 1c30844d ("mm: reclaim small amounts of memory when an external fragmentation event occurs")
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: <stable@vger.kernel.org>	[5.0+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Revert "mm, thp: restore node-local hugepage allocations" · a8282608
      Andrea Arcangeli authored
      This reverts commit 2f0799a0 ("mm, thp: restore node-local
      hugepage allocations").
      
      Commit 2f0799a0 was rightfully applied to avoid the risk of a
      severe regression that was reported by the kernel test robot at the end
      of the merge window.  Now we understand the regression was a false
      positive, caused by a significant increase in fairness during a
      swap thrashing benchmark.  So it's safe to re-apply the fix and continue
      improving the code from there.  The benchmark that reported the
      regression is very useful, but it provides a meaningful result only when
      there is no significant alteration in fairness during the workload.  The
      removal of __GFP_THISNODE increased fairness.
      
      __GFP_THISNODE cannot be used in the generic page faults path for new
      memory allocations under the MPOL_DEFAULT mempolicy, or the allocation
      behavior significantly deviates from what the MPOL_DEFAULT semantics are
      supposed to be for THP and 4k allocations alike.
      
      Setting THP defrag to "always" or using MADV_HUGEPAGE (with THP defrag
      set to "madvise") was never meant to provide an implicit MPOL_BIND on
      the "current" node the task is running on, causing swap storms and
      providing a much more aggressive behavior than even zone_reclaim_node =
      3.
      
      Any workload that could have benefited from __GFP_THISNODE now has to
      enable zone_reclaim_mode=1||2||3.  __GFP_THISNODE implicitly provided
      the zone_reclaim_mode behavior, but it only did so if THP was enabled:
      if THP was disabled, there would have been no chance to get any 4k page
      from the current node if the current node was full of pagecache, which
      further shows how this __GFP_THISNODE was misplaced in MADV_HUGEPAGE.
      MADV_HUGEPAGE has never been intended to provide any zone_reclaim_mode
      semantics, in fact the two are orthogonal, zone_reclaim_mode = 1|2|3
      must work exactly the same with MADV_HUGEPAGE set or not.
      
      The performance characteristic of memory depends on the hardware
      details.  The numbers below are obtained on Naples/EPYC architecture and
      the N/A projection extends them to show what we should aim for in the
      future as a good THP NUMA locality default.  The benchmark used
      exercises random memory seeks (note: the cost of the page faults is not
      part of the measurement).
      
        D0 THP | D0 4k | D1 THP | D1 4k | D2 THP | D2 4k | D3 THP | D3 4k | ...
        0%     | +43%  | +45%   | +106% | +131%  | +224% | N/A    | N/A
      
      D0 means distance zero (i.e.  local memory), D1 means distance one (i.e.
      intra socket memory), D2 means distance two (i.e.  inter socket memory),
      etc...
      
      For the guest physical memory allocated by qemu and for guest mode
      kernel the performance characteristic of RAM is more complex and an
      ideal default could be:
      
        D0 THP | D1 THP | D0 4k | D2 THP | D1 4k | D3 THP | D2 4k | D3 4k | ...
        0%     | +58%   | +101% | N/A    | +222% | N/A    | N/A   | N/A
      
      NOTE: the N/A are projections and haven't been measured yet, the
      measurement in this case is done on a 1950x with only two NUMA nodes.
      The THP case here means THP was used both in the host and in the guest.
      
      After applying this commit the THP NUMA locality order that we'll get
      out of MADV_HUGEPAGE is this:
      
        D0 THP | D1 THP | D2 THP | D3 THP | ... | D0 4k | D1 4k | D2 4k | D3 4k | ...
      
      Before this commit it was:
      
        D0 THP | D0 4k | D1 4k | D2 4k | D3 4k | ...
      
      Even if we ignore the breakage of large workloads that can't fit in a
      single node that the __GFP_THISNODE implicit "current node" mbind
      caused, the THP NUMA locality order provided by __GFP_THISNODE was still
      not the one we shall aim for in the long term (i.e.  the first one at
      the top).
      
      After this commit is applied, we can introduce a new allocator multi
      order API and replace those two alloc_pages_vmas calls in the page
      fault path with a single multi order call:
      
              unsigned int order = (1 << HPAGE_PMD_ORDER) | (1 << 0);
              page = alloc_pages_multi_order(..., &order);
              if (!page)
              	goto out;
              if (!(order & (1 << 0))) {
              	VM_WARN_ON(order != 1 << HPAGE_PMD_ORDER);
              	/* THP fault */
              } else {
              	VM_WARN_ON(order != 1 << 0);
              	/* 4k fallback */
              }
      
      The page allocator logic has to be altered so that when it fails on any
      zone with order 9, it has to try again with order 0 before falling
      back to the next zone in the zonelist.
      
      After that we need to do more measurements and evaluate if adding an
      opt-in feature for guest mode is worth it, to swap "DN 4k | DN+1 THP"
      with "DN+1 THP | DN 4k" at every NUMA distance crossing.
      
      Link: http://lkml.kernel.org/r/20190503223146.2312-3-aarcange@redhat.com
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Zi Yan <zi.yan@cs.rutgers.edu>
      Cc: Stefan Priebe - Profihost AG <s.priebe@profihost.ag>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Revert "Revert "mm, thp: consolidate THP gfp handling into alloc_hugepage_direct_gfpmask"" · 92717d42
      Andrea Arcangeli authored
      Patch series "reapply: relax __GFP_THISNODE for MADV_HUGEPAGE mappings".
      
      The fixes for what was originally reported as "pathological THP
      behavior" were rightfully reverted to be sure not to introduce
      regressions at the end of a merge window after a severe regression
      report from the kernel bot.  We can safely re-apply them now that we
      had time to analyze the problem.
      
      The mm process worked fine, because the good fixes were eventually
      committed upstream without excessive delay.
      
      The regression reported by the kernel bot however forced us to revert
      the good fixes to be sure not to introduce regressions and to give us
      the time to analyze the issue further.  The silver lining is that this
      extra time allowed us to think more about this issue and also to plan a
      future direction for improving things further in terms of THP NUMA
      locality.
      
      This patch (of 2):
      
      This reverts commit 356ff8a9 ("Revert "mm, thp: consolidate THP
      gfp handling into alloc_hugepage_direct_gfpmask"").  So it reapplies
      89c83fb5 ("mm, thp: consolidate THP gfp handling into
      alloc_hugepage_direct_gfpmask").
      
      Consolidation of the THP allocation flags in one place was meant to
      be a cleanup that makes the otherwise scattered code easier to handle,
      as the scattering imposes a maintenance burden.  There were no real
      problems observed
      with the gfp mask consolidation but the reversion was rushed through
      without a larger consensus regardless.
      
      This patch brings the consolidation back because this should make the
      long term maintainability easier as well as it should allow future
      changes to be less error prone.
      
      [mhocko@kernel.org: changelog additions]
      Link: http://lkml.kernel.org/r/20190503223146.2312-2-aarcange@redhat.com
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Zi Yan <zi.yan@cs.rutgers.edu>
      Cc: Stefan Priebe - Profihost AG <s.priebe@profihost.ag>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • include/asm-generic/5level-fixup.h: fix variable 'p4d' set but not used · 0cfaee2a
      Qian Cai authored
      A compiler throws a warning on an arm64 system since commit 9849a569
      ("arch, mm: convert all architectures to use 5level-fixup.h"),
      
        mm/kasan/init.c: In function 'kasan_free_p4d':
        mm/kasan/init.c:344:9: warning: variable 'p4d' set but not used [-Wunused-but-set-variable]
         p4d_t *p4d;
                ^~~
      
      because p4d_none() in "5level-fixup.h" is compiled away while it is a
      static inline function in "pgtable-nopud.h".
      
      However, if p4d_none() were converted to a static inline there,
      powerpc would be unhappy, as it reads those headers in assembler
      language in "arch/powerpc/include/asm/book3s/64/pgtable.h", so the
      assembly include needs to skip the static inline C function.

      While at it, convert a few similar functions to be consistent with the
      ones in "pgtable-nopud.h".
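
      A hedged sketch of the conversion; the guard placement is the point,
      the body is illustrative:

      ```
      /* Hedged sketch for 5level-fixup.h: a static inline "uses" its p4d
       * argument as far as the compiler is concerned, unlike a macro that
       * discards it, but it must be hidden from assembly includes.
       */
      #ifndef __ASSEMBLY__
      static inline int p4d_none(p4d_t p4d)
      {
      	return 0;	/* the folded p4d level is never "none" */
      }
      #endif /* __ASSEMBLY__ */
      ```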
      
      Link: http://lkml.kernel.org/r/20190806232917.881-1-cai@lca.pw
      Signed-off-by: Qian Cai <cai@lca.pw>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Jason Gunthorpe <jgg@ziepe.ca>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • seq_file: fix problem when seeking mid-record · 6a2aeab5
      NeilBrown authored
      If you use lseek or similar (e.g.  pread) to access a location in a
      seq_file file that is within a record, rather than at a record boundary,
      then the first read will return the remainder of the record, and the
      second read will return the whole of that same record (instead of the
      next record).  When seeking to a record boundary, the next record is
      correctly returned.
      
      This bug was introduced by a recent patch (identified below).  Before
      that patch, seq_read() would increment m->index when the last of the
      buffer was returned (m->count == 0).  After that patch, we rely on
      ->next to increment m->index after filling the buffer - but there was
      one place where that didn't happen.
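
      A hedged sketch of the corrected loop shape in traverse() (paraphrased
      from the description above, not the literal diff):

      ```
      /* Hedged sketch: advance the iterator as soon as a record has been
       * rendered, so m->index is correct even when we stop mid-record.
       */
      p = m->op->next(m, p, &m->index);	/* always bump m->index here */
      if (pos + m->count > offset) {		/* seek lands inside this record */
      	m->from = offset - pos;		/* return only the remainder */
      	m->count -= m->from;
      	break;				/* the next read starts at m->index */
      }
      pos += m->count;
      m->count = 0;
      if (pos == offset)
      	break;
      ```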
      
      Link: https://lkml.kernel.org/lkml/877e7xl029.fsf@notabene.neil.brown.name/
      Fixes: 1f4aace6 ("fs/seq_file.c: simplify seq_file iteration code and interface")
      Signed-off-by: NeilBrown <neilb@suse.com>
      Reported-by: Sergei Turchanov <turchanov@farpost.com>
      Tested-by: Sergei Turchanov <turchanov@farpost.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Markus Elfring <Markus.Elfring@web.de>
      Cc: <stable@vger.kernel.org>	[4.19+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: workingset: fix vmstat counters for shadow nodes · ec9f0238
      Roman Gushchin authored
      Memcg counters for shadow nodes are broken because the memcg pointer is
      obtained in a wrong way. The following approach is used:
              virt_to_page(xa_node)->mem_cgroup
      
      Since commit 4d96ba35 ("mm: memcg/slab: stop setting
      page->mem_cgroup pointer for slab pages") page->mem_cgroup pointer isn't
      set for slab pages, so memcg_from_slab_page() should be used instead.
      
      Also I doubt that it ever worked correctly: virt_to_head_page() should
      be used instead of virt_to_page().  Otherwise objects residing on tail
      pages are not accounted, because only the head page contains a valid
      mem_cgroup pointer.  That was a case since the introduction of these
      counters by the commit 68d48e6a ("mm: workingset: add vmstat counter
      for shadow nodes").
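
      A hedged sketch of a slab-aware counter helper along those lines (names
      are close to, but not guaranteed to match, the actual patch):

      ```
      /* Hedged sketch: resolve the memcg of a slab-allocated object via the
       * head page, since slab pages no longer carry page->mem_cgroup and
       * tail pages never did.
       */
      void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val)
      {
      	struct page *page = virt_to_head_page(p);	/* not virt_to_page() */
      	pg_data_t *pgdat = page_pgdat(page);
      	struct mem_cgroup *memcg;
      	struct lruvec *lruvec;

      	rcu_read_lock();
      	memcg = memcg_from_slab_page(page);	/* not page->mem_cgroup */
      	if (!memcg || memcg == root_mem_cgroup) {
      		__mod_node_page_state(pgdat, idx, val);	/* untracked: node only */
      	} else {
      		lruvec = mem_cgroup_lruvec(pgdat, memcg);
      		__mod_lruvec_state(lruvec, idx, val);
      	}
      	rcu_read_unlock();
      }
      ```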
      
      Link: http://lkml.kernel.org/r/20190801233532.138743-1-guro@fb.com
      Fixes: 4d96ba35 ("mm: memcg/slab: stop setting page->mem_cgroup pointer for slab pages")
      Signed-off-by: Roman Gushchin <guro@fb.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/usercopy: use memory range to be accessed for wraparound check · 95153169
      Isaac J. Manjarres authored
      Currently, when checking to see if accessing n bytes starting at address
      "ptr" will cause a wraparound in the memory addresses, the check in
      check_bogus_address() adds an extra byte, which is incorrect, as the
      range of addresses that will be accessed is [ptr, ptr + (n - 1)].
      
      This can lead to incorrectly detecting a wraparound in the memory
      address when trying to read 4 KB from memory that is mapped to the
      last possible page in the virtual address space, when in fact
      accessing that range of memory would not cause a wraparound to occur.
      
      Use the memory range that will actually be accessed when considering if
      accessing a certain amount of bytes will cause the memory address to
      wrap around.
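
      In other words (a hedged sketch of the corrected check, not the
      literal diff):

      ```
      /* Hedged sketch: the range touched is [ptr, ptr + (n - 1)], so test
       * wraparound against the last byte actually accessed.
       */
      static bool range_wraps(const unsigned long ptr, unsigned long n)
      {
      	return n && ptr + (n - 1) < ptr;	/* was: ptr + n < ptr */
      }
      ```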
      
      Link: http://lkml.kernel.org/r/1564509253-23287-1-git-send-email-isaacm@codeaurora.org
      Fixes: f5509cc1 ("mm: Hardened usercopy")
      Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org>
      Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org>
      Co-developed-by: Prasad Sodagudi <psodagud@codeaurora.org>
      Reviewed-by: William Kucharski <william.kucharski@oracle.com>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Trilok Soni <tsoni@codeaurora.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: kmemleak: disable early logging in case of error · fcf3a5b6
      Catalin Marinas authored
      If an error occurs during kmemleak_init() (e.g.  kmem cache cannot be
      created), kmemleak is disabled but kmemleak_early_log remains enabled.
      Subsequently, when the .init.text section is freed, the log_early()
      function no longer exists.  To avoid a page fault in such scenario,
      ensure that kmemleak_disable() also disables early logging.
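
      A hedged, simplified sketch of the shape of the fix:

      ```
      /* Hedged sketch: when kmemleak shuts itself down, also stop routing
       * callbacks through log_early(), which lives in .init.text and is
       * freed after boot.
       */
      static void kmemleak_disable(void)
      {
      	kmemleak_enabled = 0;		/* stop tracing memory operations */
      	kmemleak_early_log = 0;		/* and stop early logging too */
      	/* ... schedule cleanup of already-recorded objects ... */
      }
      ```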
      
      Link: http://lkml.kernel.org/r/20190731152302.42073-1-catalin.marinas@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Qian Cai <cai@lca.pw>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/vmalloc.c: fix percpu free VM area search criteria · 5336e52c
      Kuppuswamy Sathyanarayanan authored
      Recent changes to the vmalloc code by commit 68ad4a33
      ("mm/vmalloc.c: keep track of free blocks for vmap allocation") can
      cause spurious percpu allocation failures.  These, in turn, can result
      in panic()s in the slub code.  One such possible panic was reported by
      Dave Hansen in the following link: https://lkml.org/lkml/2019/6/19/939.
      Another related panic observed is,
      
       RIP: 0033:0x7f46f7441b9b
       Call Trace:
        dump_stack+0x61/0x80
        pcpu_alloc.cold.30+0x22/0x4f
        mem_cgroup_css_alloc+0x110/0x650
        cgroup_apply_control_enable+0x133/0x330
        cgroup_mkdir+0x41b/0x500
        kernfs_iop_mkdir+0x5a/0x90
        vfs_mkdir+0x102/0x1b0
        do_mkdirat+0x7d/0xf0
        do_syscall_64+0x5b/0x180
        entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      VMALLOC memory manager divides the entire VMALLOC space (VMALLOC_START
      to VMALLOC_END) into multiple VM areas (struct vm_areas), and it mainly
      uses two lists (vmap_area_list & free_vmap_area_list) to track the used
      and free VM areas in VMALLOC space.  The pcpu_get_vm_areas(offsets[],
      sizes[], nr_vms, align) function is used for allocating congruent VM
      areas for the percpu memory allocator.  In order not to conflict with
      VMALLOC users, pcpu_get_vm_areas allocates VM areas near the end of the
      VMALLOC space.  So the search for free vm_area for the given requirement
      starts near VMALLOC_END and moves upwards towards VMALLOC_START.
      
      Prior to commit 68ad4a33, the search for free vm_area in
      pcpu_get_vm_areas() involves following two main steps.
      
      Step 1:
          Find an aligned "base" address near VMALLOC_END.
          va = free vm area near VMALLOC_END
      Step 2:
          Loop through number of requested vm_areas and check,
              Step 2.1:
                 if (base < VMALLOC_START)
                    1. fail with error
              Step 2.2:
                 // end is offsets[area] + sizes[area]
                 if (base + end > va->vm_end)
                     1. Move the base downwards and repeat Step 2
              Step 2.3:
                 if (base + start < va->vm_start)
                    1. Move to previous free vm_area node, find aligned
                       base address and repeat Step 2
      
      But commit 68ad4a33 removed Step 2.2 and modified Step 2.3 as
      below:

              Step 2.3:
                 if (base + start < va->vm_start || base + end > va->vm_end)
                    1. Move to the previous free vm_area node, find an
                       aligned base address and repeat Step 2
      
      The above change is the root cause of the spurious percpu memory
      allocation failures.  For example, consider a case where a
      relatively large free vm_area (~30 TB) was ignored during the
      search because it did not pass the base + end <= va->vm_end
      boundary check.  Ignoring such large free vm_areas can leave no
      free vm_area within the VMALLOC_START to VMALLOC_END boundary,
      which in turn leads to allocation failures.
      
      So modify the search algorithm to include Step 2.2.
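
      A minimal sketch of the reinstated check inside the
      pcpu_get_vm_areas() search loop; the variable and helper names
      (va, base, end, term_area, pvm_determine_end_from_reverse())
      mirror the surrounding code and are assumptions here:

        /* Step 2.2: end is offsets[area] + sizes[area] */
        if (base + end > va->va_end) {
            /*
             * The requested range sticks out past the top of this
             * free block: slide base downwards so that base + end
             * fits into the block, then recheck all areas
             * (repeat Step 2).
             */
            base = pvm_determine_end_from_reverse(&va, align) - end;
            term_area = area;
            continue;
        }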
      
      Link: http://lkml.kernel.org/r/20190729232139.91131-1-sathyanarayanan.kuppuswamy@linux.intel.com
      Fixes: 68ad4a33 ("mm/vmalloc.c: keep track of free blocks for vmap allocation")
      Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
      Reported-by: Dave Hansen <dave.hansen@intel.com>
      Acked-by: Dennis Zhou <dennis@kernel.org>
      Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
      Cc: Roman Gushchin <guro@fb.com>
      Cc: sathyanarayanan kuppuswamy <sathyanarayanan.kuppuswamy@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/memcontrol.c: fix use after free in mem_cgroup_iter() · 54a83d6b
      Miles Chen authored
      This patch reports and fixes a use-after-free in mem_cgroup_iter()
      that persists after commit be265775 ("mm: memcg: fix use after
      free in mem_cgroup_iter()").

      I work with the Android kernel trees (4.9 & 4.14), and commit
      be265775 ("mm: memcg: fix use after free in mem_cgroup_iter()")
      has been merged into both trees.  However, I can still observe the
      use-after-free issue addressed by that commit (on low-end devices,
      a few times this month).
      
      backtrace:
              css_tryget <- crash here
              mem_cgroup_iter
              shrink_node
              shrink_zones
              do_try_to_free_pages
              try_to_free_pages
              __perform_reclaim
              __alloc_pages_direct_reclaim
              __alloc_pages_slowpath
              __alloc_pages_nodemask
      
      To debug, I poisoned mem_cgroup before freeing it:

        static void __mem_cgroup_free(struct mem_cgroup *memcg)
        {
            for_each_node(node)
                free_mem_cgroup_per_node_info(memcg, node);
            free_percpu(memcg->stat);
      +     /* poison memcg before freeing it */
      +     memset(memcg, 0x78, sizeof(struct mem_cgroup));
            kfree(memcg);
        }
      
      The coredump shows that position = 0xdbbc2a00 has already been
      freed; the repeated 0x78 poison pattern fills the object:
      
        (gdb) p/x ((struct mem_cgroup_per_node *)0xe5009e00)->iter[8]
        $13 = {position = 0xdbbc2a00, generation = 0x2efd}
      
        0xdbbc2a00:     0xdbbc2e00      0x00000000      0xdbbc2800      0x00000100
        0xdbbc2a10:     0x00000200      0x78787878      0x00026218      0x00000000
        0xdbbc2a20:     0xdcad6000      0x00000001      0x78787800      0x00000000
        0xdbbc2a30:     0x78780000      0x00000000      0x0068fb84      0x78787878
        0xdbbc2a40:     0x78787878      0x78787878      0x78787878      0xe3fa5cc0
        0xdbbc2a50:     0x78787878      0x78787878      0x00000000      0x00000000
        0xdbbc2a60:     0x00000000      0x00000000      0x00000000      0x00000000
        0xdbbc2a70:     0x00000000      0x00000000      0x00000000      0x00000000
        0xdbbc2a80:     0x00000000      0x00000000      0x00000000      0x00000000
        0xdbbc2a90:     0x00000001      0x00000000      0x00000000      0x00100000
        0xdbbc2aa0:     0x00000001      0xdbbc2ac8      0x00000000      0x00000000
        0xdbbc2ab0:     0x00000000      0x00000000      0x00000000      0x00000000
        0xdbbc2ac0:     0x00000000      0x00000000      0xe5b02618      0x00001000
        0xdbbc2ad0:     0x00000000      0x78787878      0x78787878      0x78787878
        0xdbbc2ae0:     0x78787878      0x78787878      0x78787878      0x78787878
        0xdbbc2af0:     0x78787878      0x78787878      0x78787878      0x78787878
        0xdbbc2b00:     0x78787878      0x78787878      0x78787878      0x78787878
        0xdbbc2b10:     0x78787878      0x78787878      0x78787878      0x78787878
        0xdbbc2b20:     0x78787878      0x78787878      0x78787878      0x78787878
        0xdbbc2b30:     0x78787878      0x78787878      0x78787878      0x78787878
        0xdbbc2b40:     0x78787878      0x78787878      0x78787878      0x78787878
        0xdbbc2b50:     0x78787878      0x78787878      0x78787878      0x78787878
        0xdbbc2b60:     0x78787878      0x78787878      0x78787878      0x78787878
        0xdbbc2b70:     0x78787878      0x78787878      0x78787878      0x78787878
        0xdbbc2b80:     0x78787878      0x78787878      0x00000000      0x78787878
        0xdbbc2b90:     0x78787878      0x78787878      0x78787878      0x78787878
        0xdbbc2ba0:     0x78787878      0x78787878      0x78787878      0x78787878
      
      In the reclaim path, try_to_free_pages() does not set up
      sc.target_mem_cgroup, and sc is passed down through
      do_try_to_free_pages(), ..., to shrink_node().

      In mem_cgroup_iter(), root is set to root_mem_cgroup because
      sc->target_mem_cgroup is NULL, so it is possible for a memcg to be
      assigned to root_mem_cgroup.nodeinfo.iter in mem_cgroup_iter():
      
              try_to_free_pages
              	struct scan_control sc = {...}, target_mem_cgroup is 0x0;
              do_try_to_free_pages
              shrink_zones
              shrink_node
              	 mem_cgroup *root = sc->target_mem_cgroup;
              	 memcg = mem_cgroup_iter(root, NULL, &reclaim);
              mem_cgroup_iter()
              	if (!root)
              		root = root_mem_cgroup;
              	...
      
              	css = css_next_descendant_pre(css, &root->css);
              	memcg = mem_cgroup_from_css(css);
              	cmpxchg(&iter->position, pos, memcg);
      
      My device uses memcg in non-hierarchical mode.  When a memcg is
      released, invalidate_reclaim_iterators() walks only dead_memcg and
      its parents, so in non-hierarchical mode it never reaches
      root_mem_cgroup:
      
        static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
        {
            struct mem_cgroup *memcg = dead_memcg;

            for (; memcg; memcg = parent_mem_cgroup(memcg))
                ...
        }
      
      So the use after free scenario looks like:
      
        CPU1						CPU2
      
        try_to_free_pages
        do_try_to_free_pages
        shrink_zones
        shrink_node
        mem_cgroup_iter()
            if (!root)
            	root = root_mem_cgroup;
            ...
            css = css_next_descendant_pre(css, &root->css);
            memcg = mem_cgroup_from_css(css);
            cmpxchg(&iter->position, pos, memcg);
      
              				invalidate_reclaim_iterators(memcg);
              				...
              				__mem_cgroup_free()
              					kfree(memcg);
      
        try_to_free_pages
        do_try_to_free_pages
        shrink_zones
        shrink_node
        mem_cgroup_iter()
            if (!root)
            	root = root_mem_cgroup;
            ...
            mz = mem_cgroup_nodeinfo(root, reclaim->pgdat->node_id);
            iter = &mz->iter[reclaim->priority];
            pos = READ_ONCE(iter->position);
            css_tryget(&pos->css) <- use after free
      
      To avoid this, we should also invalidate root_mem_cgroup.nodeinfo.iter
      in invalidate_reclaim_iterators().
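
      A sketch of that fix; __invalidate_reclaim_iterators() is a
      hypothetical helper factoring out the existing per-node
      iterator-clearing loop:

        static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
        {
            struct mem_cgroup *memcg = dead_memcg;
            struct mem_cgroup *last;

            /* invalidate dead_memcg and all of its parents, as before */
            do {
                __invalidate_reclaim_iterators(memcg, dead_memcg);
                last = memcg;
            } while ((memcg = parent_mem_cgroup(memcg)));

            /*
             * In non-hierarchical mode the parent walk above stops
             * short of root_mem_cgroup, so invalidate its
             * nodeinfo.iter entries explicitly.
             */
            if (last != root_mem_cgroup)
                __invalidate_reclaim_iterators(root_mem_cgroup, dead_memcg);
        }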
      
      [cai@lca.pw: fix -Wparentheses compilation warning]
        Link: http://lkml.kernel.org/r/1564580753-17531-1-git-send-email-cai@lca.pw
      Link: http://lkml.kernel.org/r/20190730015729.4406-1-miles.chen@mediatek.com
      Fixes: 5ac8fb31 ("mm: memcontrol: convert reclaim iterator to simple css refcounting")
      Signed-off-by: Miles Chen <miles.chen@mediatek.com>
      Signed-off-by: Qian Cai <cai@lca.pw>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>