14 Oct, 2020 (40 commits)
    • Merge tag 'hyperv-next-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux · 4907a43d
      Linus Torvalds authored
      Pull Hyper-V updates from Wei Liu:
      
       - a series from Boqun Feng to support page size larger than 4K
      
       - a few miscellaneous clean-ups
      
      * tag 'hyperv-next-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux:
        hv: clocksource: Add notrace attribute to read_hv_sched_clock_*() functions
        x86/hyperv: Remove aliases with X64 in their name
        PCI: hv: Document missing hv_pci_protocol_negotiation() parameter
        scsi: storvsc: Support PAGE_SIZE larger than 4K
        Driver: hv: util: Use VMBUS_RING_SIZE() for ringbuffer sizes
        HID: hyperv: Use VMBUS_RING_SIZE() for ringbuffer sizes
        Input: hyperv-keyboard: Use VMBUS_RING_SIZE() for ringbuffer sizes
        hv_netvsc: Use HV_HYP_PAGE_SIZE for Hyper-V communication
        hv: hyperv.h: Introduce some hvpfn helper functions
        Drivers: hv: vmbus: Move virt_to_hvpfn() to hyperv header
        Drivers: hv: Use HV_HYP_PAGE in hv_synic_enable_regs()
        Drivers: hv: vmbus: Introduce types of GPADL
        Drivers: hv: vmbus: Move __vmbus_open()
        Drivers: hv: vmbus: Always use HV_HYP_PAGE_SIZE for gpadl
        drivers: hv: remove cast from hyperv_die_event
      4907a43d
    • Merge tag 'x86_seves_for_v5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · da9803df
      Linus Torvalds authored
      Pull x86 SEV-ES support from Borislav Petkov:
       "SEV-ES enhances the current guest memory encryption support called SEV
        by also encrypting the guest register state, making the registers
        inaccessible to the hypervisor by en-/decrypting them on world
        switches. Thus, it adds additional protection to Linux guests against
        exfiltration, control flow and rollback attacks.
      
        With SEV-ES, the guest is in full control of what registers the
        hypervisor can access. This is provided by a guest-host exchange
        mechanism based on a new exception vector called VMM Communication
        Exception (#VC), a new instruction called VMGEXIT and a shared
        Guest-Host Communication Block which is a decrypted page shared
        between the guest and the hypervisor.
      
        Intercepts to the hypervisor become #VC exceptions in an SEV-ES guest
        so in order for that exception mechanism to work, the early x86 init
        code needed to be made able to handle exceptions, which, in itself,
        brings a bunch of very nice cleanups and improvements to the early
        boot code like an early page fault handler, allowing for on-demand
        building of the identity mapping. With that, !KASLR configurations do
        not use the EFI page table anymore but switch to a kernel-controlled
        one.
      
        The main part of this series adds the support for that new exchange
        mechanism. The goal has been to keep this as separate as possible
        from the core x86 code by concentrating the machinery in two
        SEV-ES-specific files:
      
          arch/x86/kernel/sev-es-shared.c
          arch/x86/kernel/sev-es.c
      
        Other interaction with core x86 code has been kept at minimum and
        behind static keys to minimize the performance impact on !SEV-ES
        setups.
      
        Work by Joerg Roedel and Thomas Lendacky and others"
      
      * tag 'x86_seves_for_v5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (73 commits)
        x86/sev-es: Use GHCB accessor for setting the MMIO scratch buffer
        x86/sev-es: Check required CPU features for SEV-ES
        x86/efi: Add GHCB mappings when SEV-ES is active
        x86/sev-es: Handle NMI State
        x86/sev-es: Support CPU offline/online
        x86/head/64: Don't call verify_cpu() on starting APs
        x86/smpboot: Load TSS and getcpu GDT entry before loading IDT
        x86/realmode: Setup AP jump table
        x86/realmode: Add SEV-ES specific trampoline entry point
        x86/vmware: Add VMware-specific handling for VMMCALL under SEV-ES
        x86/kvm: Add KVM-specific VMMCALL handling under SEV-ES
        x86/paravirt: Allow hypervisor-specific VMMCALL handling under SEV-ES
        x86/sev-es: Handle #DB Events
        x86/sev-es: Handle #AC Events
        x86/sev-es: Handle VMMCALL Events
        x86/sev-es: Handle MWAIT/MWAITX Events
        x86/sev-es: Handle MONITOR/MONITORX Events
        x86/sev-es: Handle INVD Events
        x86/sev-es: Handle RDPMC Events
        x86/sev-es: Handle RDTSC(P) Events
        ...
      da9803df
    • Merge tag 'objtool-core-2020-10-13' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · 6873139e
      Linus Torvalds authored
      Pull objtool updates from Ingo Molnar:
       "Most of the changes are cleanups and reorganization to make the
        objtool code more arch-agnostic. This is in preparation for non-x86
        support.
      
        Other changes:
      
         - KASAN fixes
      
         - Handle unreachable trap after call to noreturn functions better
      
         - Ignore unreachable fake jumps
      
         - Misc smaller fixes & cleanups"
      
      * tag 'objtool-core-2020-10-13' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (21 commits)
        perf build: Allow nested externs to enable BUILD_BUG() usage
        objtool: Allow nested externs to enable BUILD_BUG()
        objtool: Permit __kasan_check_{read,write} under UACCESS
        objtool: Ignore unreachable trap after call to noreturn functions
        objtool: Handle calling non-function symbols in other sections
        objtool: Ignore unreachable fake jumps
        objtool: Remove useless tests before save_reg()
        objtool: Decode unwind hint register depending on architecture
        objtool: Make unwind hint definitions available to other architectures
        objtool: Only include valid definitions depending on source file type
        objtool: Rename frame.h -> objtool.h
        objtool: Refactor jump table code to support other architectures
        objtool: Make relocation in alternative handling arch dependent
        objtool: Abstract alternative special case handling
        objtool: Move macros describing structures to arch-dependent code
        objtool: Make sync-check consider the target architecture
        objtool: Group headers to check in a single list
        objtool: Define 'struct orc_entry' only when needed
        objtool: Skip ORC entry creation for non-text sections
        objtool: Move ORC logic out of check()
        ...
      6873139e
    • Merge branch 'akpm' (patches from Andrew) · d5660df4
      Linus Torvalds authored
      Merge misc updates from Andrew Morton:
       "181 patches.
      
        Subsystems affected by this patch series: kbuild, scripts, ntfs,
        ocfs2, vfs, mm (slab, slub, kmemleak, dax, debug, pagecache, fadvise,
        gup, swap, memremap, memcg, selftests, pagemap, mincore, hmm, dma,
        memory-failure, vmalloc and migration)"
      
      * emailed patches from Andrew Morton <akpm@linux-foundation.org>: (181 commits)
        mm/migrate: remove obsolete comment about device public
        mm/migrate: remove cpages-- in migrate_vma_finalize()
        mm, oom_adj: don't loop through tasks in __set_oom_adj when not necessary
        memblock: use separate iterators for memory and reserved regions
        memblock: implement for_each_reserved_mem_region() using __next_mem_region()
        memblock: remove unused memblock_mem_size()
        x86/setup: simplify reserve_crashkernel()
        x86/setup: simplify initrd relocation and reservation
        arch, drivers: replace for_each_membock() with for_each_mem_range()
        arch, mm: replace for_each_memblock() with for_each_mem_pfn_range()
        memblock: reduce number of parameters in for_each_mem_range()
        memblock: make memblock_debug and related functionality private
        memblock: make for_each_memblock_type() iterator private
        mircoblaze: drop unneeded NUMA and sparsemem initializations
        riscv: drop unneeded node initialization
        h8300, nds32, openrisc: simplify detection of memory extents
        arm64: numa: simplify dummy_numa_init()
        arm, xtensa: simplify initialization of high memory pages
        dma-contiguous: simplify cma_early_percent_memory()
        KVM: PPC: Book3S HV: simplify kvm_cma_reserve()
        ...
      d5660df4
    • mm/migrate: remove obsolete comment about device public · f1f4f3ab
      Ralph Campbell authored
      Device public memory never had an in tree consumer and was removed in
      commit 25b2995a ("mm: remove MEMORY_DEVICE_PUBLIC support").  Delete
      the obsolete comment.
       Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: Christoph Hellwig <hch@lst.de>
       Link: http://lkml.kernel.org/r/20200827190735.12752-2-rcampbell@nvidia.com
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f1f4f3ab
    • mm/migrate: remove cpages-- in migrate_vma_finalize() · 42578891
      Ralph Campbell authored
      The variable struct migrate_vma->cpages is only used in
      migrate_vma_setup().  There is no need to decrement it in
      migrate_vma_finalize() since it is never checked.
       Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Jason Gunthorpe <jgg@nvidia.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: Christoph Hellwig <hch@lst.de>
       Link: http://lkml.kernel.org/r/20200827190735.12752-1-rcampbell@nvidia.com
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      42578891
    • mm, oom_adj: don't loop through tasks in __set_oom_adj when not necessary · 67197a4f
      Suren Baghdasaryan authored
      Currently __set_oom_adj loops through all processes in the system to keep
      oom_score_adj and oom_score_adj_min in sync between processes sharing
       their mm.  This is done for any task with mm_users greater than one, which
      includes processes with multiple threads (sharing mm and signals).
      However for such processes the loop is unnecessary because their signal
      structure is shared as well.
      
       Android updates oom_score_adj whenever a task changes its role
      (background/foreground/...) or binds to/unbinds from a service, making it
      more/less important.  Such operation can happen frequently.  We noticed
      that updates to oom_score_adj became more expensive and after further
      investigation found out that the patch mentioned in "Fixes" introduced a
      regression.  Using Pixel 4 with a typical Android workload, write time to
      oom_score_adj increased from ~3.57us to ~362us.  Moreover this regression
      linearly depends on the number of multi-threaded processes running on the
      system.
      
      Mark the mm with a new MMF_MULTIPROCESS flag bit when task is created with
      (CLONE_VM && !CLONE_THREAD && !CLONE_VFORK).  Change __set_oom_adj to use
      MMF_MULTIPROCESS instead of mm_users to decide whether oom_score_adj
      update should be synchronized between multiple processes.  To prevent
      races between clone() and __set_oom_adj(), when oom_score_adj of the
      process being cloned might be modified from userspace, we use
      oom_adj_mutex.  Its scope is changed to global.
      
      The combination of (CLONE_VM && !CLONE_THREAD) is rarely used except for
      the case of vfork().  To prevent performance regressions of vfork(), we
      skip taking oom_adj_mutex and setting MMF_MULTIPROCESS when CLONE_VFORK is
      specified.  Clearing the MMF_MULTIPROCESS flag (when the last process
      sharing the mm exits) is left out of this patch to keep it simple and
      because it is believed that this threading model is rare.  Should there
      ever be a need for optimizing that case as well, it can be done by hooking
      into the exit path, likely following the mm_update_next_owner pattern.
      
      With the combination of (CLONE_VM && !CLONE_THREAD && !CLONE_VFORK) being
      quite rare, the regression is gone after the change is applied.
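
       As a rough sketch of the clone-time marking described above (the helper
       name and exact hook point are assumptions for illustration; only
       MMF_MULTIPROCESS and oom_adj_mutex come from this message):

       	/* hypothetical helper called on the fork path, sketch only */
       	static void mark_oom_si_shared(u64 clone_flags, struct task_struct *tsk)
       	{
       		/* threads and vfork() children are deliberately skipped */
       		if ((clone_flags & (CLONE_VM | CLONE_THREAD | CLONE_VFORK)) != CLONE_VM)
       			return;

       		/* serialize against __set_oom_adj() via the now-global mutex */
       		mutex_lock(&oom_adj_mutex);
       		set_bit(MMF_MULTIPROCESS, &tsk->mm->flags);
       		mutex_unlock(&oom_adj_mutex);
       	}

       __set_oom_adj() can then test this flag bit instead of walking every task
       in the system.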
      
      [surenb@google.com: v3]
        Link: https://lkml.kernel.org/r/20200902012558.2335613-1-surenb@google.com
      
      Fixes: 44a70ade ("mm, oom_adj: make sure processes sharing mm have same view of oom_score_adj")
       Reported-by: Tim Murray <timmurray@google.com>
       Suggested-by: Michal Hocko <mhocko@kernel.org>
       Signed-off-by: Suren Baghdasaryan <surenb@google.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
       Acked-by: Michal Hocko <mhocko@suse.com>
       Acked-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Eugene Syromiatnikov <esyr@redhat.com>
      Cc: Christian Kellner <christian@kellner.me>
      Cc: Adrian Reber <areber@redhat.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Aleksa Sarai <cyphar@cyphar.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Alexey Gladkov <gladkov.alexey@gmail.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
      Cc: Andrei Vagin <avagin@gmail.com>
      Cc: Bernd Edlinger <bernd.edlinger@hotmail.de>
      Cc: John Johansen <john.johansen@canonical.com>
      Cc: Yafang Shao <laoar.shao@gmail.com>
       Link: https://lkml.kernel.org/r/20200824153036.3201505-1-surenb@google.com
       Debugged-by: Minchan Kim <minchan@kernel.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      67197a4f
    • memblock: use separate iterators for memory and reserved regions · cc6de168
      Mike Rapoport authored
      for_each_memblock() is used to iterate over memblock.memory in a few
      places that use data from memblock_region rather than the memory ranges.
      
      Introduce separate for_each_mem_region() and
      for_each_reserved_mem_region() to improve encapsulation of memblock
      internals from its users.
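
       A minimal usage sketch of the new iterators (assuming they walk
       struct memblock_region entries, just like for_each_memblock() does):

       	struct memblock_region *reg;

       	for_each_mem_region(reg) {
       		/* reg->base, reg->size and reg->flags describe one region */
       	}

       	for_each_reserved_mem_region(reg) {
       		/* the same, but over memblock.reserved */
       	}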
       Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Reviewed-by: Baoquan He <bhe@redhat.com>
      Acked-by: Ingo Molnar <mingo@kernel.org>			[x86]
      Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>	[MIPS]
      Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>	[.clang-format]
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
       Link: https://lkml.kernel.org/r/20200818151634.14343-18-rppt@kernel.org
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cc6de168
    • memblock: implement for_each_reserved_mem_region() using __next_mem_region() · 9f3d5eaa
      Mike Rapoport authored
      Iteration over memblock.reserved with for_each_reserved_mem_region() used
      __next_reserved_mem_region() that implemented a subset of
      __next_mem_region().
      
      Use __for_each_mem_range() and, essentially, __next_mem_region() with
      appropriate parameters to reduce code duplication.
      
      While on it, rename for_each_reserved_mem_region() to
      for_each_reserved_mem_range() for consistency.
       Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>	[.clang-format]
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
       Link: https://lkml.kernel.org/r/20200818151634.14343-17-rppt@kernel.org
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9f3d5eaa
    • memblock: remove unused memblock_mem_size() · 5bd0960b
      Mike Rapoport authored
       The only user of memblock_mem_size() was the x86 setup code; it is gone now
       and the memblock_mem_size() function can be removed.
       Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Reviewed-by: Baoquan He <bhe@redhat.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
       Link: https://lkml.kernel.org/r/20200818151634.14343-16-rppt@kernel.org
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5bd0960b
    • x86/setup: simplify reserve_crashkernel() · 6120cdc0
      Mike Rapoport authored
      * Replace magic numbers with defines
      * Replace memblock_find_in_range() + memblock_reserve() with
        memblock_phys_alloc_range()
      * Stop checking for low memory size in reserve_crashkernel_low(). The
         allocation from the limited range will fail anyway if there is not enough
        memory, so there is no need for extra traversal of memblock.memory
       Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Reviewed-by: Baoquan He <bhe@redhat.com>
       Acked-by: Ingo Molnar <mingo@kernel.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
       Link: https://lkml.kernel.org/r/20200818151634.14343-15-rppt@kernel.org
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6120cdc0
    • x86/setup: simplify initrd relocation and reservation · 3c45ee6d
      Mike Rapoport authored
       Currently, the initrd image is reserved very early during setup and then
       it might be relocated and re-reserved after the initial physical memory
       mapping is created.  The "late" reservation code verifies that the mapped
       memory size exceeds the size of the initrd, then checks whether
       relocation is required and, if so, relocates the initrd to new memory
       allocated from memblock and frees the old location.
      
      The check for memory size is excessive as memblock allocation will anyway
       fail if there is not enough memory.  Besides, there is no point in
       allocating memory from memblock using memblock_find_in_range() +
      memblock_reserve() when there exists memblock_phys_alloc_range() with
      required functionality.
      
      Remove the redundant check and simplify memblock allocation.
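
       Roughly, the replacement looks like this (a sketch following the
       memblock_phys_alloc_range() prototype; variable names are illustrative):

       	/* before: find a hole, then reserve it in a second step */
       	addr = memblock_find_in_range(start, end, size, align);
       	if (addr)
       		memblock_reserve(addr, size);

       	/* after: a single call that allocates within [start, end) */
       	addr = memblock_phys_alloc_range(size, align, start, end);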
       Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Reviewed-by: Baoquan He <bhe@redhat.com>
       Acked-by: Ingo Molnar <mingo@kernel.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
       Link: https://lkml.kernel.org/r/20200818151634.14343-14-rppt@kernel.org
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3c45ee6d
    • arch, drivers: replace for_each_membock() with for_each_mem_range() · b10d6bca
      Mike Rapoport authored
      There are several occurrences of the following pattern:
      
      	for_each_memblock(memory, reg) {
       		start = __pfn_to_phys(memblock_region_memory_base_pfn(reg));
      		end = __pfn_to_phys(memblock_region_memory_end_pfn(reg));
      
      		/* do something with start and end */
      	}
      
      Using for_each_mem_range() iterator is more appropriate in such cases and
      allows simpler and cleaner code.
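
       With the reduced-parameter for_each_mem_range() introduced earlier in
       this series, the same loop can be written roughly as (sketch):

       	u64 i;
       	phys_addr_t start, end;

       	for_each_mem_range(i, &start, &end) {
       		/* start and end are already physical addresses */
       	}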
      
      [akpm@linux-foundation.org: fix arch/arm/mm/pmsa-v7.c build]
      [rppt@linux.ibm.com: mips: fix cavium-octeon build caused by memblock refactoring]
         Link: http://lkml.kernel.org/r/20200827124549.GD167163@linux.ibm.com
       Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
       Link: https://lkml.kernel.org/r/20200818151634.14343-13-rppt@kernel.org
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b10d6bca
    • arch, mm: replace for_each_memblock() with for_each_mem_pfn_range() · c9118e6c
      Mike Rapoport authored
      There are several occurrences of the following pattern:
      
      	for_each_memblock(memory, reg) {
      		start_pfn = memblock_region_memory_base_pfn(reg);
      		end_pfn = memblock_region_memory_end_pfn(reg);
      
      		/* do something with start_pfn and end_pfn */
      	}
      
      Rather than iterate over all memblock.memory regions and each time query
      for their start and end PFNs, use for_each_mem_pfn_range() iterator to get
      simpler and clearer code.
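
       A minimal sketch of the replacement loop (MAX_NUMNODES means "all
       nodes"; treat the details as illustrative):

       	int i, nid;
       	unsigned long start_pfn, end_pfn;

       	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
       		/* do something with start_pfn and end_pfn */
       	}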
       Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Reviewed-by: Baoquan He <bhe@redhat.com>
      Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>	[.clang-format]
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
       Link: https://lkml.kernel.org/r/20200818151634.14343-12-rppt@kernel.org
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c9118e6c
    • memblock: reduce number of parameters in for_each_mem_range() · 6e245ad4
      Mike Rapoport authored
      Currently for_each_mem_range() and for_each_mem_range_rev() iterators are
      the most generic way to traverse memblock regions.  As such, they have 8
      parameters and they are hardly convenient to users.  Most users choose to
      utilize one of their wrappers and the only user that actually needs most
      of the parameters is memblock itself.
      
      To avoid yet another naming for memblock iterators, rename the existing
      for_each_mem_range[_rev]() to __for_each_mem_range[_rev]() and add a new
      for_each_mem_range[_rev]() wrappers with only index, start and end
      parameters.
      
      The new wrapper nicely fits into init_unavailable_mem() and will be used
      in upcoming changes to simplify memblock traversals.
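
       For illustration, a sketch of the intended contrast (the exact argument
       list of the double-underscore form is an assumption based on the
       description above):

       	u64 i;
       	phys_addr_t start, end;

       	/* full-blown iterator, now needed only inside memblock itself */
       	__for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
       			     MEMBLOCK_NONE, &start, &end, NULL) {
       		/* ... */
       	}

       	/* the new wrapper for ordinary users: index, start and end only */
       	for_each_mem_range(i, &start, &end) {
       		/* ... */
       	}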
       Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>	[MIPS]
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
       Link: https://lkml.kernel.org/r/20200818151634.14343-11-rppt@kernel.org
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6e245ad4
    • memblock: make memblock_debug and related functionality private · 87c55870
      Mike Rapoport authored
      The only user of memblock_dbg() outside memblock was s390 setup code and
       it is converted to use pr_debug() instead.  This makes it possible to stop
       exposing memblock_debug and memblock_dbg() to the rest of the kernel.
      
      [akpm@linux-foundation.org: make memblock_dbg() safer and neater]
       Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Reviewed-by: Baoquan He <bhe@redhat.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
       Link: https://lkml.kernel.org/r/20200818151634.14343-10-rppt@kernel.org
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      87c55870
    • memblock: make for_each_memblock_type() iterator private · cd991db8
      Mike Rapoport authored
      for_each_memblock_type() is not used outside mm/memblock.c, move it there
      from include/linux/memblock.h
       Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Reviewed-by: Baoquan He <bhe@redhat.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
       Link: https://lkml.kernel.org/r/20200818151634.14343-9-rppt@kernel.org
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cd991db8
    • mircoblaze: drop unneeded NUMA and sparsemem initializations · 49645793
      Mike Rapoport authored
       microblaze supports neither NUMA nor SPARSEMEM, so there is no point in
       calling the memblock_set_node() and
       sparse_memory_present_with_active_regions() functions during microblaze
       memory initialization.
      
      Remove these calls and the surrounding code.
       Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
       Link: https://lkml.kernel.org/r/20200818151634.14343-8-rppt@kernel.org
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      49645793
    • riscv: drop unneeded node initialization · c8e47018
      Mike Rapoport authored
      RISC-V does not (yet) support NUMA and for UMA architectures node 0 is
      used implicitly during early memory initialization.
      
      There is no need to call memblock_set_node(), remove this call and the
      surrounding code.
       Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
       Link: https://lkml.kernel.org/r/20200818151634.14343-7-rppt@kernel.org
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c8e47018
    • h8300, nds32, openrisc: simplify detection of memory extents · 80c45744
      Mike Rapoport authored
      Instead of traversing memblock.memory regions to find memory_start and
      memory_end, simply query memblock_{start,end}_of_DRAM().
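
       That is, instead of a for_each_memblock() loop that tracks minima and
       maxima, roughly (sketch):

       	memory_start = memblock_start_of_DRAM();
       	memory_end = memblock_end_of_DRAM();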
       Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Acked-by: Stafford Horne <shorne@gmail.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
       Link: https://lkml.kernel.org/r/20200818151634.14343-6-rppt@kernel.org
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      80c45744
    • arm64: numa: simplify dummy_numa_init() · ab8f21aa
      Mike Rapoport authored
      dummy_numa_init() loops over memblock.memory and passes nid=0 to
      numa_add_memblk() which essentially wraps memblock_set_node().  However,
      memblock_set_node() can cope with entire memory span itself, so the loop
      over memblock.memory regions is redundant.
      
      Using a single call to memblock_set_node() rather than a loop also fixes
      an issue with a buggy ACPI firmware in which the SRAT table covers some
      but not all of the memory in the EFI memory map.
      
      Jonathan Cameron says:
      
        This issue can be easily triggered by having an SRAT table which fails
        to cover all elements of the EFI memory map.
      
        This firmware error is detected and a warning printed. e.g.
        "NUMA: Warning: invalid memblk node 64 [mem 0x240000000-0x27fffffff]"
        At that point we fall back to dummy_numa_init().
      
        However, the failed ACPI init has left us with our memblocks all broken
        up as we split them when trying to assign them to NUMA nodes.
      
        We then iterate over the memblocks and add them to node 0.
      
        numa_add_memblk() calls memblock_set_node() which merges regions that
        were previously split up during the earlier attempt to add them to
        different nodes during parsing of SRAT.
      
        This means elements are moved in the memblock array and we can end up
        in a different memblock after the call to numa_add_memblk().
        Result is:
      
        Unable to handle kernel paging request at virtual address 0000000000003a40
        Mem abort info:
          ESR = 0x96000004
          EC = 0x25: DABT (current EL), IL = 32 bits
          SET = 0, FnV = 0
          EA = 0, S1PTW = 0
        Data abort info:
          ISV = 0, ISS = 0x00000004
          CM = 0, WnR = 0
        [0000000000003a40] user address but active_mm is swapper
        Internal error: Oops: 96000004 [#1] PREEMPT SMP
      
        ...
      
        Call trace:
          sparse_init_nid+0x5c/0x2b0
          sparse_init+0x138/0x170
          bootmem_init+0x80/0xe0
          setup_arch+0x2a0/0x5fc
          start_kernel+0x8c/0x648
      
       Replace the loop with a single memblock_set_node() call that covers the
       entire memory.
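
       In other words, something along these lines (a sketch; whether it goes
       through numa_add_memblk() or calls memblock_set_node() directly is a
       detail of the patch):

       	/* cover all of memory with node 0 in one go */
       	return numa_add_memblk(0, memblock_start_of_DRAM(), memblock_end_of_DRAM());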
       Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
       Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
       Link: https://lkml.kernel.org/r/20200818151634.14343-5-rppt@kernel.org
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ab8f21aa
    • arm, xtensa: simplify initialization of high memory pages · cddb5ddf
      Mike Rapoport authored
      free_highpages() in both arm and xtensa essentially open-code
      for_each_free_mem_range() loop to detect high memory pages that were not
      reserved and that should be initialized and passed to the buddy allocator.
      
      Replace open-coded implementation of for_each_free_mem_range() with usage
      of memblock API to simplify the code.
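
       A minimal sketch of the memblock-based variant (the iterator yields
       free, i.e. not reserved, physical ranges):

       	u64 i;
       	phys_addr_t start, end;

       	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE, &start, &end, NULL) {
       		unsigned long pfn = PFN_UP(start);
       		unsigned long end_pfn = PFN_DOWN(end);

       		/* hand only the highmem part of [pfn, end_pfn) to the buddy allocator */
       	}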
       Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Tested-by: Max Filippov <jcmvbkbc@gmail.com>	[xtensa]
      Reviewed-by: Max Filippov <jcmvbkbc@gmail.com>	[xtensa]
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
       Link: https://lkml.kernel.org/r/20200818151634.14343-4-rppt@kernel.org
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cddb5ddf
    • dma-contiguous: simplify cma_early_percent_memory() · e9aa36cc
      Mike Rapoport authored
       The memory size calculation in cma_early_percent_memory() traverses
       memblock.memory rather than simply calling memblock_phys_mem_size().  The
       comment in that function suggests that at some point there had to be a
       call to memblock_analyze() before memblock_phys_mem_size() could be used.
       As of now, there is no memblock_analyze() at all and
       memblock_phys_mem_size() can be used as soon as cold-plug memory is
       registered with memblock.
      
      Replace loop over memblock.memory with a call to memblock_phys_mem_size().
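
       With that, the whole calculation collapses to roughly (sketch):

       	return memblock_phys_mem_size() * CONFIG_CMA_SIZE_PERCENTAGE / 100;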
       Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Reviewed-by: Christoph Hellwig <hch@lst.de>
       Reviewed-by: Baoquan He <bhe@redhat.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
       Link: https://lkml.kernel.org/r/20200818151634.14343-3-rppt@kernel.org
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e9aa36cc
    • KVM: PPC: Book3S HV: simplify kvm_cma_reserve() · 04ba0a92
      Mike Rapoport authored
      Patch series "memblock: seasonal cleaning^w cleanup", v3.
      
      These patches simplify several uses of memblock iterators and hide some of
      the memblock implementation details from the rest of the system.
      
      This patch (of 17):
      
       The memory size calculation in kvm_cma_reserve() traverses memblock.memory
       rather than simply calling memblock_phys_mem_size().  The comment in that
       function suggests that at some point there had to be a call to
       memblock_analyze() before memblock_phys_mem_size() could be used.  As of
       now, there is no memblock_analyze() at all and memblock_phys_mem_size()
       can be used as soon as cold-plug memory is registered with memblock.
      
      Replace loop over memblock.memory with a call to memblock_phys_mem_size().
       Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Link: https://lkml.kernel.org/r/20200818151634.14343-1-rppt@kernel.org
       Link: https://lkml.kernel.org/r/20200818151634.14343-2-rppt@kernel.org
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      04ba0a92
    • mm/mempool: add 'else' to split mutually exclusive case · 544941d7
      Miaohe Lin authored
       Add an else to split the mutually exclusive cases and avoid some unnecessary checks.
      It doesn't seem to change code generation (compiler is smart), but I think
      it helps readability.
      
      [akpm@linux-foundation.org: fix comment location]
       Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Link: https://lkml.kernel.org/r/20200924111641.28922-1-linmiaohe@huawei.com
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      544941d7
    • f8fd5253
    • mm/mempolicy: remove or narrow the lock on current · 78b132e9
      Wei Yang authored
      It is not necessary to hold the lock of current when setting nodemask of
      a new policy.
       Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
       Link: https://lkml.kernel.org/r/20200921040416.86185-1-richard.weiyang@linux.alibaba.com
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      78b132e9
    • selftests/vm: 8x compaction_test speedup · 11002620
      John Hubbard authored
       This patch reduces the running time for compaction_test from about 27 sec
       to 3.3 sec, which is about an 8x speedup.
      
      These numbers are for an Intel x86_64 system with 32 GB of DRAM.
      
      The compaction_test.c program was spending most of its time doing mmap(),
      1 MB at a time, on about 25 GB of memory.
      
      Instead, do the mmaps 100 MB at a time.  (Going past 100 MB doesn't make
      things go much faster, because other parts of the program are using the
      remaining time.)
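
       A sketch of the chunked approach (the constant and loop shape are
       illustrative, not the exact test code):

       	#define CHUNK_SIZE	(100UL * 1024 * 1024)	/* 100 MB, illustrative */

       	for (size_t done = 0; done < total_size; done += CHUNK_SIZE) {
       		char *map = mmap(NULL, CHUNK_SIZE, PROT_READ | PROT_WRITE,
       				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
       		if (map == MAP_FAILED)
       			break;
       		/* touch the pages so they are actually allocated */
       		memset(map, 1, CHUNK_SIZE);
       	}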
       Signed-off-by: John Hubbard <jhubbard@nvidia.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Acked-by: Sri Jayaramappa <sjayaram@akamai.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
       Link: https://lkml.kernel.org/r/20201002080621.551044-2-jhubbard@nvidia.com
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      11002620
    • Mateusz Nosek's avatar
      mm/compaction.c: micro-optimization remove unnecessary branch · 62b35fe0
      Mateusz Nosek authored
      The same code works for both 'zone->compact_considered > defer_limit' and
      'zone->compact_considered >= defer_limit'.  The latter needs one branch
      fewer, which is slightly better for performance.
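
      A sketch of the idea (names follow compaction_deferred() in
      mm/compaction.c; the exact patched code may differ slightly): with '>=',
      the clamp and the "stop deferring" decision share a single test.

        if (++zone->compact_considered >= defer_limit) {
                zone->compact_considered = defer_limit; /* clamp and stop deferring */
                return false;
        }

        trace_mm_compaction_deferred(zone, order);
        return true;                                    /* still deferred */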
      Signed-off-by: Mateusz Nosek <mateusznosek0@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: David Rientjes <rientjes@google.com>
      Link: https://lkml.kernel.org/r/20200913190448.28649-1-mateusznosek0@gmail.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      62b35fe0
    • Xiang Chen's avatar
      mm/zbud: remove redundant initialization · 18601294
      Xiang Chen authored
      zhdr is already initialized at the beginning of the function, so remove the
      redundant initialization here.
      Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Cc: Seth Jennings <sjenning@redhat.com>
      Cc: Dan Streetman <ddstreet@ieee.org>
      Link: https://lkml.kernel.org/r/1600419885-191907-1-git-send-email-chenxiang66@hisilicon.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      18601294
    • Hui Su's avatar
      mm/z3fold.c: use xx_zalloc instead xx_alloc and memset · f94afee9
      Hui Su authored
      alloc_slots() allocates memory for slots using kmem_cache_alloc(), then
      memsets it.  We can just use kmem_cache_zalloc().
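
      A before/after sketch (struct and field names as in mm/z3fold.c, hedged):

        /* before: allocate, then clear the slots array by hand */
        slots = kmem_cache_alloc(pool->c_handle, gfp);
        if (slots)
                memset(slots->slot, 0, sizeof(slots->slot));

        /* after: kmem_cache_zalloc() returns already-zeroed memory */
        slots = kmem_cache_zalloc(pool->c_handle, gfp);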
      Signed-off-by: Hui Su <sh_def@163.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Link: https://lkml.kernel.org/r/20200926100834.GA184671@rlk
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f94afee9
    • Hui Su's avatar
      mm/vmscan: fix comments for isolate_lru_page() · 01c4776b
      Hui Su authored
      fix comments for isolate_lru_page():
      s/fundamentnal/fundamental
      Signed-off-by: Hui Su <sh_def@163.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Link: https://lkml.kernel.org/r/20200927173923.GA8058@rlk
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      01c4776b
    • Chunxin Zang's avatar
      mm/vmscan: fix infinite loop in drop_slab_node · 069c411d
      Chunxin Zang authored
      We have observed that drop_caches can take a considerable amount of
      time (<put data here>), especially when many memcgs are involved,
      because each of them adds overhead.
      
      It is quite unfortunate that the operation cannot be interrupted by a
      signal currently.  Add a check for fatal signals into the main loop so
      that userspace can control early bailout.
      
      There are two reasons why the loop may never terminate:
      
      1. There are too many memcgs: even if each memcg frees only a single
         object, the total number of freed objects stays above 10.
      
      2. Traversing all memcgs once takes a long time, so by the time the next
         traversal starts, the memcgs visited first have accumulated many
         freeable objects again and the freed count exceeds 10 once more.
      
      We can get the following info through 'ps':
      
        root:~# ps -aux | grep drop
        root  357956 ... R    Aug25 21119854:55 echo 3 > /proc/sys/vm/drop_caches
        root 1771385 ... R    Aug16 21146421:17 echo 3 > /proc/sys/vm/drop_caches
        root 1986319 ... R    18:56 117:27 echo 3 > /proc/sys/vm/drop_caches
        root 2002148 ... R    Aug24 5720:39 echo 3 > /proc/sys/vm/drop_caches
        root 2564666 ... R    18:59 113:58 echo 3 > /proc/sys/vm/drop_caches
        root 2639347 ... R    Sep03 2383:39 echo 3 > /proc/sys/vm/drop_caches
        root 3904747 ... R    03:35 993:31 echo 3 > /proc/sys/vm/drop_caches
        root 4016780 ... R    Aug21 7882:18 echo 3 > /proc/sys/vm/drop_caches
      
      Use bpftrace to follow the 'freed' value in drop_slab_node:
      
        root:~# bpftrace -e 'kprobe:drop_slab_node+70 {@ret=hist(reg("bp")); }'
        Attaching 1 probe...
        ^B^C
      
        @ret:
        [64, 128)        1 |                                                    |
        [128, 256)      28 |                                                    |
        [256, 512)     107 |@                                                   |
        [512, 1K)      298 |@@@                                                 |
        [1K, 2K)       613 |@@@@@@@                                             |
        [2K, 4K)      4435 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
        [4K, 8K)       442 |@@@@@                                               |
        [8K, 16K)      299 |@@@                                                 |
        [16K, 32K)     100 |@                                                   |
        [32K, 64K)     139 |@                                                   |
        [64K, 128K)     56 |                                                    |
        [128K, 256K)    26 |                                                    |
        [256K, 512K)     2 |                                                    |
      
      In the while loop, check whether a fatal signal is pending and, if so,
      break out of the loop.
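
      A sketch of the resulting loop (modelled on drop_slab_node() in
      mm/vmscan.c; exact code hedged): the fatal-signal check lets the writer of
      drop_caches be killed instead of spinning forever.

        void drop_slab_node(int nid)
        {
                unsigned long freed;

                do {
                        struct mem_cgroup *memcg = NULL;

                        if (fatal_signal_pending(current))
                                return;         /* early bailout, e.g. on SIGKILL */

                        freed = 0;
                        memcg = mem_cgroup_iter(NULL, NULL, NULL);
                        do {
                                freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
                        } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
                } while (freed > 10);
        }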
      Signed-off-by: Chunxin Zang <zangchunxin@bytedance.com>
      Signed-off-by: Muchun Song <songmuchun@bytedance.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Chris Down <chris@chrisdown.name>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Matthew Wilcox <willy@infradead.org>
      Link: https://lkml.kernel.org/r/20200909152047.27905-1-zangchunxin@bytedance.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      069c411d
    • Mike Kravetz's avatar
      hugetlb: add lockdep check for i_mmap_rwsem held in huge_pmd_share · 0bf7b64e
      Mike Kravetz authored
      As a debugging aid, huge_pmd_share should make sure i_mmap_rwsem is held
      if necessary.  To clarify the 'if necessary', expand the comment block at
      the beginning of huge_pmd_share.
      
      No functional change.  The added i_mmap_assert_locked() call is a no-op
      unless CONFIG_LOCKDEP is enabled.
      
      Ideally, this should have been included with commit 34ae204f
      ("hugetlbfs: remove call to huge_pte_alloc without i_mmap_rwsem").
      Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Link: https://lkml.kernel.org/r/20200911201248.88537-1-mike.kravetz@oracle.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0bf7b64e
    • Wei Yang's avatar
      mm/hugetlb: take the free hpage during the iteration directly · 6664bfc8
      Wei Yang authored
      Function dequeue_huge_page_node_exact() iterates the free list and returns
      the first valid free hpage.
      
      Instead of breaking out of the loop and then checking the loop variable, we
      can return from within the loop directly.  This removes a redundant check.
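
      A hedged sketch of the reworked loop (names from mm/hugetlb.c; some checks
      and bookkeeping elided):

        static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
        {
                struct page *page;

                list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
                        if (PageHWPoison(page))
                                continue;       /* skip unusable pages */

                        list_move(&page->lru, &h->hugepage_activelist);
                        set_page_refcounted(page);
                        h->free_huge_pages--;
                        h->free_huge_pages_node[nid]--;
                        return page;            /* return straight from the loop */
                }

                return NULL;
        }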
      
      [mike.kravetz@oracle.com: points out a logic error]
      [richard.weiyang@linux.alibaba.com: v4]
        Link: https://lkml.kernel.org/r/20200901014636.29737-8-richard.weiyang@linux.alibaba.com
      Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Link: https://lkml.kernel.org/r/20200831022351.20916-8-richard.weiyang@linux.alibaba.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6664bfc8
    • Wei Yang's avatar
      mm/hugetlb: narrow the hugetlb_lock protection area during preparing huge page · 2f37511c
      Wei Yang authored
      set_hugetlb_cgroup_[rsvd] just manipulates page-local data, which does not
      need to be protected by hugetlb_lock.
      
      Let's move it out of the locked region.
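
      A sketch of the narrowed region in prep_new_huge_page() (hedged; the exact
      code may differ):

        /* page-local setup needs no lock */
        INIT_LIST_HEAD(&page->lru);
        set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
        set_hugetlb_cgroup(page, NULL);
        set_hugetlb_cgroup_rsvd(page, NULL);

        /* hugetlb_lock now only covers the global counters */
        spin_lock(&hugetlb_lock);
        h->nr_huge_pages++;
        h->nr_huge_pages_node[nid]++;
        spin_unlock(&hugetlb_lock);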
      Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Baoquan He <bhe@redhat.com>
      Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Link: https://lkml.kernel.org/r/20200831022351.20916-7-richard.weiyang@linux.alibaba.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2f37511c
    • Wei Yang's avatar
      mm/hugetlb: a page from buddy is not on any list · 15a8d68e
      Wei Yang authored
      A page allocated from the buddy allocator is not on any list, so a plain
      list_add() is enough.
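
      The gist, as a sketch (the target list name is illustrative): a page fresh
      from the buddy allocator has an unused ->lru, so there is nothing to delete
      it from first.

        /* before: list_move() does a list_del() that is not needed here */
        list_move(&page->lru, &h->hugepage_activelist);

        /* after: the page is on no list yet, list_add() is sufficient */
        list_add(&page->lru, &h->hugepage_activelist);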
      Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Baoquan He <bhe@redhat.com>
      Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Link: https://lkml.kernel.org/r/20200831022351.20916-6-richard.weiyang@linux.alibaba.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      15a8d68e
    • Wei Yang's avatar
      mm/hugetlb: count file_region to be added when regions_needed != NULL · 972a3da3
      Wei Yang authored
      There are only two cases in function add_reservation_in_range():
      
          * count file_region and return the number in regions_needed
          * do the real list operation without counting
      
      This means it is not necessary to have two parameters to distinguish these
      two cases.
      
      Just use regions_needed to separate them.
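
      A hedged sketch of how the single parameter selects the mode (simplified,
      not the verbatim add_reservation_in_range(); variable names illustrative):

        /* for each missing range [from, to) found while walking the list: */
        if (regions_needed) {
                /* counting pass: report how many file_regions would be needed */
                *regions_needed += 1;
        } else {
                /* real pass: link a pre-allocated file_region into the list */
                list_add(&nrg->link, rg->link.prev);
                add += to - from;
        }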
      Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Baoquan He <bhe@redhat.com>
      Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Link: https://lkml.kernel.org/r/20200831022351.20916-5-richard.weiyang@linux.alibaba.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      972a3da3
    • Wei Yang's avatar
      mm/hugetlb: use list_splice to merge two list at once · d3ec7b6e
      Wei Yang authored
      Instead of adding the allocated file_regions to region_cache one by one, we
      can use list_splice to merge the two lists at once.
      
      Also, since we know the number of entries in the list, increase the count
      directly.
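
      A sketch of the splice (field names as in struct resv_map, hedged):

        /* before: move entries one by one, bumping the count each time */
        list_for_each_entry_safe(rg, trg, &allocated_regions, link) {
                list_del(&rg->link);
                list_add(&rg->link, &resv->region_cache);
                resv->region_cache_count++;
        }

        /* after: one splice, and the batch size is already known */
        list_splice(&allocated_regions, &resv->region_cache);
        resv->region_cache_count += to_allocate;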
      Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Baoquan He <bhe@redhat.com>
      Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Link: https://lkml.kernel.org/r/20200831022351.20916-4-richard.weiyang@linux.alibaba.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d3ec7b6e