1. 14 May, 2015 40 commits
    • userfaultfd: add userfaultfd_wp mm helpers · daac4c23
      Andrea Arcangeli authored
      These helpers will be used to decide whether to call handle_userfault()
      during wrprotect faults, in order to deliver those faults to userland.
    • userfaultfd: UFFDIO_REMAP · 093b3515
      Andrea Arcangeli authored
      This remap ioctl atomically moves a page into or out of a userfaultfd
      address space. It is more expensive than "copy" (and of course more
      expensive than "zerofill") because it requires a TLB flush on the
      source range for each ioctl, which is an expensive operation on
      SMP. Especially when moving only a few pages at a time, copying
      without a TLB flush is faster.
    • userfaultfd: remap_pages: UFFDIO_REMAP preparation · bc2b1bb5
      Andrea Arcangeli authored
      remap_pages is the low-level mm helper needed to implement
      UFFDIO_REMAP.
    • userfaultfd: UFFDIO_REMAP uABI · faf4506f
      Andrea Arcangeli authored
      This implements the uABI of UFFDIO_REMAP.
      
      Notably, one mode bitflag is forwarded to (and in turn known by) the
      low-level remap_pages method.
    • userfaultfd: remap_pages: swp_entry_swapcount() preparation · b3d3df5a
      Andrea Arcangeli authored
      Provide a new swapfile method for remap_pages() to verify that the
      swap entry is mapped in only one vma before relocating it to a
      different virtual address. Otherwise, if the swap entry is mapped in
      multiple vmas, the page could get mapped into some anon_vma in a
      non-linear way when it is swapped back in.
    • userfaultfd: remap_pages: rmap preparation · 0242f35e
      Andrea Arcangeli authored
      As far as the rmap code is concerned, remap_pages only alters
      page->mapping and page->index, and it does so while holding the page
      lock. However, a few places are allowed to do rmap walks on anon
      pages without the page lock (split_huge_page and
      page_referenced_anon). Those places must be updated to re-check that
      page->mapping didn't change after they obtained the anon_vma
      lock. remap_pages takes the anon_vma lock for writing before altering
      page->mapping, so if page->mapping is still the same after obtaining
      the anon_vma lock (without the page lock), the rmap walks can go
      ahead safely (and remap_pages will wait for them to complete before
      proceeding).
      
      remap_pages serializes against itself with the page lock.
      
      All other places that take the anon_vma lock while holding the
      mmap_sem for writing don't need to check whether page->mapping has
      changed after taking the anon_vma lock, regardless of the page lock,
      because remap_pages holds the mmap_sem for reading.
      
      There's one constraint enforced to allow this simplification: the
      source pages passed to remap_pages must be mapped in only one vma,
      but this is not a limitation when it is used to handle userland page
      faults. The source addresses passed to remap_pages should be set as
      VM_DONTCOPY with MADV_DONTFORK, to avoid any risk of the mapcount of
      the pages increasing if fork runs in parallel in another thread
      before or while remap_pages runs.
    • userfaultfd: UFFDIO_COPY and UFFDIO_ZEROPAGE · 729a7f4e
      Andrea Arcangeli authored
      These two ioctls allow either atomically copying pages or mapping
      zeropages into the virtual address space. They are used by the thread
      that opened the userfaultfd to resolve the userfaults.
    • userfaultfd: avoid mmap_sem read recursion in mcopy_atomic · 517183a7
      Andrea Arcangeli authored
      The rwsem starving writers wasn't strictly a bug, but lockdep
      doesn't like it, and this change avoids depending on low-level
      implementation details of the lock.
    • userfaultfd: mcopy_atomic|mfill_zeropage: UFFDIO_COPY|UFFDIO_ZEROPAGE preparation · 3f65604a
      Andrea Arcangeli authored
      This implements mcopy_atomic and mfill_zeropage, the low-level VM
      methods invoked by the UFFDIO_COPY and UFFDIO_ZEROPAGE userfaultfd
      commands respectively.
    • userfaultfd: UFFDIO_COPY|UFFDIO_ZEROPAGE uAPI · 8e71cd35
      Andrea Arcangeli authored
      This implements the uABI of UFFDIO_COPY and UFFDIO_ZEROPAGE.
    • userfaultfd: activate syscall · c20828d5
      Andrea Arcangeli authored
      This activates the userfaultfd syscall.
    • userfaultfd: buildsystem activation · f2d31f1d
      Andrea Arcangeli authored
      This allows userfaultfd to be selected at configuration time so that
      it gets built.
    • userfaultfd: solve the race between UFFDIO_COPY|ZEROPAGE and read · 5a2b3614
      Andrea Arcangeli authored
      Solve in-kernel the race between UFFDIO_COPY|ZEROPAGE and
      userfaultfd_read if they are run on different threads simultaneously.
      
      Until now qemu solved the race in userland: the race was explicitly
      and intentionally left for userland to solve. However we can also
      solve it in the kernel.
      
      Requiring all users to solve this race themselves if they use two
      threads (one for the background transfer and one for the userfault
      reads) isn't very attractive from an API perspective; furthermore,
      solving it in-kernel allows removing a whole bunch of mutex and
      bitmap code from qemu, making it faster. The cost of
      __get_user_pages_fast should be insignificant compared to the
      overhead in userland of maintaining those structures, considering it
      scales perfectly and the pagetables are already hot in the CPU
      cache.
      
      Applying this patch is backwards compatible with respect to the
      userfaultfd userland API; however, reverting this change wouldn't be
      backwards compatible anymore.
      
      Without this patch, qemu's background transfer thread has to read the
      old state and do UFFDIO_WAKE if old_state was MISSING but became
      REQUESTED by the time it tried to set it to RECEIVED (signaling that
      the other side received a userfault).
      
          vcpu                background_thr userfault_thr
          -----               -----          -----
          vcpu0 handle_mm_fault()
      
      			postcopy_place_page
      			read old_state -> MISSING
       			UFFDIO_COPY 0x7fb76a139000 (no wakeup, still pending)
      
          vcpu0 fault at 0x7fb76a139000 enters handle_userfault
          poll() is kicked
      
       					poll() -> POLLIN
       					read() -> 0x7fb76a139000
       					postcopy_pmi_change_state(MISSING, REQUESTED) -> REQUESTED
      
       			tmp_state = postcopy_pmi_change_state(old_state, RECEIVED) -> REQUESTED
      			/* check that no userfault raced with UFFDIO_COPY */
      			if (old_state == MISSING && tmp_state == REQUESTED)
      				UFFDIO_WAKE from background thread
      
      And a second case where a UFFDIO_WAKE would be needed is in the userfault thread:
      
          vcpu                background_thr userfault_thr
          -----               -----          -----
          vcpu0 handle_mm_fault()
      
      			postcopy_place_page
      			read old_state -> MISSING
       			UFFDIO_COPY 0x7fb76a139000 (no wakeup, still pending)
       			tmp_state = postcopy_pmi_change_state(old_state, RECEIVED) -> RECEIVED
      
          vcpu0 fault at 0x7fb76a139000 enters handle_userfault
          poll() is kicked
      
       					poll() -> POLLIN
       					read() -> 0x7fb76a139000
      
       					if (postcopy_pmi_change_state(MISSING, REQUESTED) == RECEIVED)
      						UFFDIO_WAKE from userfault thread
      
      This patch removes the need for UFFDIO_WAKE in both cases and for
      the associated per-page tristate as well.
    • userfaultfd: allocate the userfaultfd_ctx cacheline aligned · bd0a30cd
      Andrea Arcangeli authored
      Use a proper slab cache to guarantee the alignment.
    • userfaultfd: optimize read() and poll() to be O(1) · 18c5b6c4
      Andrea Arcangeli authored
      This makes read() O(1), and poll(), which was already O(1), becomes
      lockless.
    • userfaultfd: wake pending userfaults · a1837777
      Andrea Arcangeli authored
      This is an optimization, but it's a userland-visible one and it
      affects the API.
      
      The downside of this optimization is that if you call poll() and you
      get POLLIN, read(ufd) may still return -EAGAIN. The blocked userfault
      may be woken by a different thread before read(ufd) comes
      around. In short, this means that poll() isn't really usable if the
      userfaultfd is opened in blocking mode.
      
      userfaults won't wait in "pending" state to be read anymore, and any
      UFFDIO_WAKE or similar operation that has the objective of waking
      userfaults after their resolution will wake all blocked userfaults
      for the resolved range, including those that haven't been read() by
      userland yet.
      
      The behavior of poll() becomes non-standard, but this obviates the
      need for "spurious" UFFDIO_WAKEs and lets the userland threads
      restart immediately without requiring a UFFDIO_WAKE. This is even
      more significant in case of repeated faults on the same address from
      multiple threads.
      
      This optimization is justified by the measurement that spurious
      UFFDIO_WAKEs account for between 5% and 10% of the total userfaults
      for heavy workloads, so it's worth optimizing them away.
    • userfaultfd: change the read API to return a uffd_msg · a18d6e1c
      Andrea Arcangeli authored
      I had requests to return the full address (not the page aligned one)
      to userland.
      
      It's not entirely clear how the page offset could be relevant,
      because userfaults aren't like SIGBUS, which can sigjump to a
      different place and actually skip resolving the fault depending on a
      page offset. There's currently no real way to skip the fault,
      especially because after a UFFDIO_COPY|ZEROPAGE the fault is
      optimized to be retried within the kernel without having to return to
      userland first (not even self-modifying code replacing the .text that
      touched the faulting address would prevent the fault from being
      repeated). Userland cannot skip repeating the fault even more so if
      the fault was triggered by a KVM secondary page fault, or by any
      get_user_pages or copy-user inside some syscall that will return to
      kernel code. The second time, FAULT_FLAG_RETRY_NOWAIT won't be set,
      leading to a SIGBUS being raised, because the userfault can't wait if
      it cannot release the mmap_sem first (and FAULT_FLAG_RETRY_NOWAIT is
      required for that).
      
      Still, returning a proper structure to userland during the read() on
      the uffd allows using the current UFFD_API for the future
      non-cooperative extensions too, and it looks cleaner as well. Once we
      get additional fields, there's no point in returning the fault
      address page-aligned anymore just to reuse the bits below PAGE_SHIFT.
      
      The only downside is that the read() syscall will read 32 bytes
      instead of 8 bytes, but that's not going to be measurable overhead.
      
      The total number of new events that can be added, or of new future
      bits for already shipped events, is limited to 64 by the features
      field of the uffdio_api structure. If more are needed, a bump of
      UFFD_API will be required.
    • userfaultfd: Rename uffd_api.bits into .features fixup · f050ac8e
      Andrea Arcangeli authored
      Update comment.
    • userfaultfd: Rename uffd_api.bits into .features · b9ca6f1f
      Pavel Emelyanov authored
      This seems to be the minimal change required to unblock standard uffd
      usage from the non-cooperative one. Now more bits can be added to the
      features field, indicating e.g. UFFD_FEATURE_FORK and others needed
      for the latter use-case.
      Signed-off-by: Pavel Emelyanov <xemul@parallels.com>
    • userfaultfd: add new syscall to provide memory externalization · 2f73ffa8
      Andrea Arcangeli authored
      Once a userfaultfd has been created and certain regions of the
      process virtual address space have been registered into it, the
      thread responsible for doing the memory externalization can manage
      the page faults in userland by talking to the kernel using the
      userfaultfd protocol.
      
      poll() can be used to know when there are new pending userfaults to
      be read (POLLIN).
    • userfaultfd: prevent khugepaged to merge if userfaultfd is armed · 33c24f63
      Andrea Arcangeli authored
      If userfaultfd is armed on a certain vma we can't "fill" the holes
      with zeroes or we'll break userland on-demand paging. If the
      userfault is armed, the holes are really missing information (not
      zeroes) that userland has to load from the network or elsewhere.
      
      The same issue happens for wrprotected ptes that we can't just
      convert into a single writable pmd_trans_huge.
      
      We could in theory still merge across zeropages if only
      VM_UFFD_MISSING is set (i.e. if VM_UFFD_WP is not set)... that could
      be slightly improved, but it'd be much more complex code for a tiny
      corner case.
    • userfaultfd: teach vma_merge to merge across vma->vm_userfaultfd_ctx · 868f0d8c
      Andrea Arcangeli authored
      vma->vm_userfaultfd_ctx is yet another vma parameter that vma_merge
      must be aware of, so that we can merge vmas back like they were
      originally before arming the userfaultfd on some memory range.
    • userfaultfd: call handle_userfault() for userfaultfd_missing() faults · d574e5aa
      Andrea Arcangeli authored
      This is where the page faults must be modified to call
      handle_userfault() if userfaultfd_missing() is true (so if the
      vma->vm_flags had VM_UFFD_MISSING set).
      
      handle_userfault() then takes care of blocking the page fault and
      delivering it to userland.
      
      The fault flags must also be passed as a parameter so that the
      "read|write" kind of fault can be passed to userland.
    • userfaultfd: add VM_UFFD_MISSING and VM_UFFD_WP · 3de85438
      Andrea Arcangeli authored
      These two flags get set in vma->vm_flags to tell the VM common code
      whether the userfaultfd is armed and in which mode (tracking only
      missing faults, tracking only wrprotect faults, or both). If neither
      flag is set, the userfaultfd is not armed on the vma.
    • userfaultfd: add vm_userfaultfd_ctx to the vm_area_struct · 1ec419d8
      Andrea Arcangeli authored
      This adds the vm_userfaultfd_ctx to the vm_area_struct.
    • userfaultfd: linux/userfaultfd_k.h · c6bb4e14
      Andrea Arcangeli authored
      Kernel header defining the methods needed by the VM common code to
      interact with the userfaultfd.
    • userfaultfd: uAPI · c90748b0
      Andrea Arcangeli authored
      Defines the uAPI of the userfaultfd, notably the ioctl numbers and protocol.
    • userfaultfd: waitqueue: add nr wake parameter to __wake_up_locked_key · 34de35c8
      Andrea Arcangeli authored
      userfaultfd needs to wake all waiters in a waitqueue (passing 0 as
      the nr parameter) instead of the current hardcoded 1 (which would
      wake only the first waiter in the head list).
    • userfaultfd: linux/Documentation/vm/userfaultfd.txt · 87d7a171
      Andrea Arcangeli authored
      Add documentation.
    • kvm: fix crash in kvm_vcpu_reload_apic_access_page · 978c6891
      Andrea Arcangeli authored
      memslot->userfault_addr is set by the kernel with an mmap executed
      from the kernel, but userland can still munmap it, leading to the
      oops below once memslot->userfault_addr points to a host virtual
      address that has no vma or mapping.
      
      [  327.538306] BUG: unable to handle kernel paging request at fffffffffffffffe
      [  327.538407] IP: [<ffffffff811a7b55>] put_page+0x5/0x50
      [  327.538474] PGD 1a01067 PUD 1a03067 PMD 0
      [  327.538529] Oops: 0000 [#1] SMP
      [  327.538574] Modules linked in: macvtap macvlan xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack ipt_REJECT iptable_filter ip_tables tun bridge stp llc rpcsec_gss_krb5 nfsv4 dns_resolver nfs fscache xprtrdma ib_isert iscsi_target_mod ib_iser libiscsi scsi_transport_iscsi ib_srpt target_core_mod ib_srp scsi_transport_srp scsi_tgt ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm ipmi_devintf iTCO_wdt iTCO_vendor_support intel_powerclamp coretemp dcdbas intel_rapl kvm_intel kvm crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd pcspkr sb_edac edac_core ipmi_si ipmi_msghandler acpi_pad wmi acpi_power_meter lpc_ich mfd_core mei_me
      [  327.539488]  mei shpchp nfsd auth_rpcgss nfs_acl lockd grace sunrpc mlx4_ib ib_sa ib_mad ib_core mlx4_en vxlan ib_addr ip_tunnel xfs libcrc32c sd_mod crc_t10dif crct10dif_common crc32c_intel mgag200 syscopyarea sysfillrect sysimgblt i2c_algo_bit drm_kms_helper ttm drm ahci i2c_core libahci mlx4_core libata tg3 ptp pps_core megaraid_sas ntb dm_mirror dm_region_hash dm_log dm_mod
      [  327.539956] CPU: 3 PID: 3161 Comm: qemu-kvm Not tainted 3.10.0-240.el7.userfault19.4ca4011.x86_64.debug #1
      [  327.540045] Hardware name: Dell Inc. PowerEdge R420/0CN7CM, BIOS 2.1.2 01/20/2014
      [  327.540115] task: ffff8803280ccf00 ti: ffff880317c58000 task.ti: ffff880317c58000
      [  327.540184] RIP: 0010:[<ffffffff811a7b55>]  [<ffffffff811a7b55>] put_page+0x5/0x50
      [  327.540261] RSP: 0018:ffff880317c5bcf8  EFLAGS: 00010246
      [  327.540313] RAX: 00057ffffffff000 RBX: ffff880616a20000 RCX: 0000000000000000
      [  327.540379] RDX: 0000000000002014 RSI: 00057ffffffff000 RDI: fffffffffffffffe
      [  327.540445] RBP: ffff880317c5bd10 R08: 0000000000000103 R09: 0000000000000000
      [  327.540511] R10: 0000000000000000 R11: 0000000000000000 R12: fffffffffffffffe
      [  327.540576] R13: 0000000000000000 R14: ffff880317c5bd70 R15: ffff880317c5bd50
      [  327.540643] FS:  00007fd230b7f700(0000) GS:ffff880630800000(0000) knlGS:0000000000000000
      [  327.540717] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [  327.540771] CR2: fffffffffffffffe CR3: 000000062a2c3000 CR4: 00000000000427e0
      [  327.540837] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [  327.540904] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      [  327.540974] Stack:
      [  327.541008]  ffffffffa05d6d0c ffff880616a20000 0000000000000000 ffff880317c5bdc0
      [  327.541093]  ffffffffa05ddaa2 0000000000000000 00000000002191bf 00000042f3feab2d
      [  327.541177]  00000042f3feab2d 0000000000000002 0000000000000001 0321000000000000
      [  327.541261] Call Trace:
      [  327.541321]  [<ffffffffa05d6d0c>] ? kvm_vcpu_reload_apic_access_page+0x6c/0x80 [kvm]
      [  327.543615]  [<ffffffffa05ddaa2>] vcpu_enter_guest+0x3f2/0x10f0 [kvm]
      [  327.545918]  [<ffffffffa05e2f10>] kvm_arch_vcpu_ioctl_run+0x2b0/0x5a0 [kvm]
      [  327.548211]  [<ffffffffa05e2d02>] ? kvm_arch_vcpu_ioctl_run+0xa2/0x5a0 [kvm]
      [  327.550500]  [<ffffffffa05ca845>] kvm_vcpu_ioctl+0x2b5/0x680 [kvm]
      [  327.552768]  [<ffffffff810b8d12>] ? creds_are_invalid.part.1+0x12/0x50
      [  327.555069]  [<ffffffff810b8d71>] ? creds_are_invalid+0x21/0x30
      [  327.557373]  [<ffffffff812d6066>] ? inode_has_perm.isra.49.constprop.65+0x26/0x80
      [  327.559663]  [<ffffffff8122d985>] do_vfs_ioctl+0x305/0x530
      [  327.561917]  [<ffffffff8122dc51>] SyS_ioctl+0xa1/0xc0
      [  327.564185]  [<ffffffff816de829>] system_call_fastpath+0x16/0x1b
      [  327.566480] Code: 0b 31 f6 4c 89 e7 e8 4b 7f ff ff 0f 0b e8 24 fd ff ff e9 a9 fd ff ff 66 66 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 66 66 66 66 90 <48> f7 07 00 c0 00 00 55 48 89 e5 75 2a 8b 47 1c 85 c0 74 1e f0
    • mm: gup: use get_user_pages_fast instead of get_user_pages_unlocked · bc26cb8c
      Andrea Arcangeli authored
      Just an optimization: use get_user_pages_fast where possible.
    • mm: gup: make get_user_pages_fast and __get_user_pages_fast latency conscious · 051bd3c6
      Andrea Arcangeli authored
      This teaches gup_fast and __gup_fast to re-enable irqs and
      cond_resched(), if possible, every BATCH_PAGES pages.
      
      This must be implemented by other archs as well, and it's a
      requirement before converting more get_user_pages() calls to
      get_user_pages_fast() as an optimization (instead of using
      get_user_pages_unlocked, which would be slower).
    • mm: zone_reclaim: compaction: add compaction to zone_reclaim_mode · 123cb69c
      Andrea Arcangeli authored
      This adds compaction to zone_reclaim so that enabling THP won't
      decrease NUMA locality with /proc/sys/vm/zone_reclaim_mode > 0.
      
      It is important to boot with numa_zonelist_order=n (n means nodes) to
      get more accurate NUMA locality if there are multiple zones per node.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
    • mm: zone_reclaim: after a successful zone_reclaim check the min watermark · ea160218
      Andrea Arcangeli authored
      If we're in the fast path and zone_reclaim() succeeded, it means we
      freed enough memory and we can use the min watermark to have some
      margin against concurrent allocations from other CPUs or interrupts.
    • mm: zone_reclaim: compaction: export compact_zone_order() · b6c2abee
      Andrea Arcangeli authored
      Needed by zone_reclaim_mode compaction-awareness.
    • mm: zone_reclaim: compaction: increase the high order pages in the watermarks · b14cae31
      Andrea Arcangeli authored
      Prevent the scaling down from reducing the watermarks too much.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
    • mm: compaction: don't require high order pages below min wmark · 7a271558
      Andrea Arcangeli authored
      The min wmark should be satisfied with just 1 hugepage, and the other
      wmarks should be adjusted accordingly. We need to pass the low wmark
      check if there's some significant amount of order-0 pages, but we
      don't need plenty of high order pages, because the PF_MEMALLOC paths
      don't require those. Creating a ton of high order pages that cannot
      be allocated by the high order allocation paths (no PF_MEMALLOC) is
      quite wasteful, because they can be split into lower order pages
      before anybody has a chance to allocate them.
    • mm: zone_reclaim: compaction: don't depend on kswapd to invoke reset_isolation_suitable · af24516e
      Andrea Arcangeli authored
      If kswapd never needs to run (only __GFP_NO_KSWAPD allocations and
      plenty of free memory), compaction is otherwise crippled and stops
      running for a while after the free/isolation cursors meet. After
      that, allocation can fail for a full cycle of compaction_deferred,
      until compaction_restarting finally resets it again.
      
      Stopping compaction for a full cycle after the cursors meet, even if
      compaction never failed and isn't going to fail, doesn't make sense.
      
      We already throttle compaction CPU utilization using
      defer_compaction. We shouldn't prevent compaction from running after
      each pass completes when the cursors meet, unless it failed.
      
      This makes direct compaction functional again. The throttling of
      direct compaction is still controlled by the defer_compaction
      logic.
      
      kswapd still won't risk resetting compaction, and will wait for
      direct compaction to do so. Not sure if this is ideal, but it at
      least decreases the risk of kswapd doing too much work. kswapd will
      only run one pass of compaction until some allocation invokes
      compaction again.
      
      This decreased reliability of compaction was introduced in commit
      62997027.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Acked-by: Rafael Aquini <aquini@redhat.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
    • mm: zone_reclaim: compaction: scan all memory with /proc/sys/vm/compact_memory · 02eaa78b
      Andrea Arcangeli authored
      Reset the stats so /proc/sys/vm/compact_memory will scan all memory.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Acked-by: Rafael Aquini <aquini@redhat.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
    • mm: zone_reclaim: remove ZONE_RECLAIM_LOCKED · 43dc77b0
      Andrea Arcangeli authored
      ZONE_RECLAIM_LOCKED breaks zone_reclaim_mode=1. If more than one
      thread allocates memory at the same time, it forces a premature
      allocation into remote NUMA nodes even when there's plenty of clean
      cache to reclaim in the local nodes.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Acked-by: Rafael Aquini <aquini@redhat.com>
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>