1. 01 Sep, 2017 2 commits
    • Merge branch 'mmu_notifier_fixes' · ea25c431
      Linus Torvalds authored
      Merge mmu_notifier fixes from Jérôme Glisse:
        "The invalidate_page callback suffered from two pitfalls. First, it
         used to happen after the page table lock was released, and thus a new
         page might have been set up for the virtual address before the call
         to invalidate_page().
      
         This is in a weird way fixed by commit c7ab0d2f ("mm: convert
         try_to_unmap_one() to use page_vma_mapped_walk()"), which moved the
         callback under the page table lock. That also broke several existing
         users of the mmu_notifier API that assumed they could sleep inside
         this callback.
      
         The second pitfall was that invalidate_page was the only callback
         that did not take a range of addresses for invalidation, but was
         given an address and a page instead. Many of the callback
         implementers assumed the page could never be THP and thus failed to
         invalidate the appropriate range for THP pages.
      
        By killing this callback we unify the mmu_notifier callback API to
        always take a virtual address range as input.
      
         There are now two clear APIs (I am not mentioning the youngness API,
         which is seldom used), sketched below:

          - the invalidate_range_start()/end() callbacks (which allow you to
            sleep)

          - invalidate_range(), where you cannot sleep, but which happens
            right after the page table update, under the page table lock
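
         [ A rough sketch of what such an ops table can look like; the
           driver code here is hypothetical, but the callback signatures
           match the mmu_notifier API of this era: ]

            static void drv_invalidate_range_start(struct mmu_notifier *mn,
                                                   struct mm_struct *mm,
                                                   unsigned long start,
                                                   unsigned long end)
            {
                    /* may sleep: take mutexes, wait for the device, ... */
            }

            static void drv_invalidate_range_end(struct mmu_notifier *mn,
                                                 struct mm_struct *mm,
                                                 unsigned long start,
                                                 unsigned long end)
            {
                    /* may sleep as well; pairs with range_start() */
            }

            static void drv_invalidate_range(struct mmu_notifier *mn,
                                             struct mm_struct *mm,
                                             unsigned long start,
                                             unsigned long end)
            {
                    /* runs under the page table spinlock: must not sleep */
            }

            static const struct mmu_notifier_ops drv_mmu_notifier_ops = {
                    .invalidate_range_start = drv_invalidate_range_start,
                    .invalidate_range_end   = drv_invalidate_range_end,
                    .invalidate_range       = drv_invalidate_range,
            };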
      
         Note that a lot of existing users look broken with respect to
         range_start()/range_end(). Many users only have a range_start()
         callback, but there is nothing preventing them from undoing what was
         invalidated in their range_start() callback after it returns but
         before any CPU page table update takes place.
      
         The code pattern used in kvm or umem odp is an example of how to
         properly avoid such a race. In a nutshell, use some kind of sequence
         number and an active-range-invalidation counter to block anything
         that might undo what the range_start() callback did.
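
         [ A minimal sketch of that pattern, loosely modeled on kvm's
           mmu_notifier_seq/mmu_notifier_count scheme; all names here are
           hypothetical: ]

            struct drv {
                    struct mmu_notifier mn;
                    spinlock_t lock;
                    unsigned long seq;      /* bumped by every invalidation */
                    int invalidating;       /* range invalidations in flight */
            };

            static void drv_range_start(struct mmu_notifier *mn,
                                        struct mm_struct *mm,
                                        unsigned long start, unsigned long end)
            {
                    struct drv *d = container_of(mn, struct drv, mn);

                    spin_lock(&d->lock);
                    d->invalidating++;
                    d->seq++;
                    /* ... tear down device mappings of [start, end) ... */
                    spin_unlock(&d->lock);
            }

            static void drv_range_end(struct mmu_notifier *mn,
                                      struct mm_struct *mm,
                                      unsigned long start, unsigned long end)
            {
                    struct drv *d = container_of(mn, struct drv, mn);

                    spin_lock(&d->lock);
                    d->invalidating--;
                    spin_unlock(&d->lock);
            }

            /* device fault path: redo the work if an invalidation raced us */
            static int drv_fault(struct drv *d, unsigned long addr)
            {
                    unsigned long seq;

            again:
                    spin_lock(&d->lock);
                    seq = d->seq;
                    spin_unlock(&d->lock);

                    /* ... look up the page, prepare the device mapping ... */

                    spin_lock(&d->lock);
                    if (d->invalidating || d->seq != seq) {
                            spin_unlock(&d->lock);
                            goto again; /* we would undo the invalidation */
                    }
                    /* ... commit the device mapping ... */
                    spin_unlock(&d->lock);
                    return 0;
            }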
      
         If you do not care about keeping fully in sync with the CPU page
         table (ie you can live with the CPU page table pointing to a new,
         different page for a given virtual address), then you can take a
         reference on the pages inside the range_start() callback and drop it
         in range_end() or when your driver is done with those pages.
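
         [ A sketch of that simpler scheme; drv_lookup_page() is a
           hypothetical driver helper, and the matching put_page() would
           live in range_end() or in the driver teardown path: ]

            static void drv_range_start(struct mmu_notifier *mn,
                                        struct mm_struct *mm,
                                        unsigned long start, unsigned long end)
            {
                    struct drv *d = container_of(mn, struct drv, mn);
                    unsigned long addr;

                    for (addr = start; addr < end; addr += PAGE_SIZE) {
                            struct page *page = drv_lookup_page(d, addr);

                            if (page)
                                    get_page(page); /* pinned until put_page() */
                    }
            }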
      
         The last alternative is to use invalidate_range(), if you can do the
         invalidation without sleeping, as the invalidate_range() callback
         happens under the CPU page table spinlock, right after the page
         table is updated.
      
         The first two patches convert existing mmu_notifier_invalidate_page()
         calls to mmu_notifier_invalidate_range() and bracket those calls with
         calls to mmu_notifier_invalidate_range_start()/end().
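
         [ Schematically, each converted call site ends up looking like
           this; a sketch of the pattern, not a literal hunk from the
           series: ]

            mmu_notifier_invalidate_range_start(mm, start, end);
            /* ... clear or rewrite the page table entries of [start, end) ... */
            mmu_notifier_invalidate_range(mm, start, end);
            mmu_notifier_invalidate_range_end(mm, start, end);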
      
         The next ten patches remove the existing invalidate_page() callbacks,
         as they can no longer be invoked.
      
         Finally, the last patch removes the invalidate_page() callback
         completely, so it can RIP.
      
        Changes since v1:
         - remove more dead code in kvm (no testing impact)
         - more accurate end address computation (patch 2) in page_mkclean_one
           and try_to_unmap_one
          - added the tested-by/reviewed-by tags collected so far"
      
      * emailed patches from Jérôme Glisse <jglisse@redhat.com>:
        mm/mmu_notifier: kill invalidate_page
        KVM: update to new mmu_notifier semantic v2
        xen/gntdev: update to new mmu_notifier semantic
        sgi-gru: update to new mmu_notifier semantic
        misc/mic/scif: update to new mmu_notifier semantic
        iommu/intel: update to new mmu_notifier semantic
        iommu/amd: update to new mmu_notifier semantic
        IB/hfi1: update to new mmu_notifier semantic
        IB/umem: update to new mmu_notifier semantic
        drm/amdgpu: update to new mmu_notifier semantic
        powerpc/powernv: update to new mmu_notifier semantic
        mm/rmap: update to new mmu_notifier semantic v2
        dax: update to new mmu_notifier semantic
    • jfs should use MAX_LFS_FILESIZE when calculating s_maxbytes · c227390c
      Dave Kleikamp authored
      jfs had previously avoided the use of MAX_LFS_FILESIZE because it hadn't
      accounted for the whole 32-bit index range on 32-bit systems.  That has
      been fixed by commit 0cc3b0ec ("Clarify (and fix) MAX_LFS_FILESIZE
      macros"), so we can simplify the code now.
      
      Suggested by Andreas Dilger.
       Signed-off-by: Dave Kleikamp <dave.kleikamp@oracle.com>
       Reviewed-by: Andreas Dilger <adilger@dilger.ca>
       Cc: jfs-discussion@lists.sourceforge.net
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 31 Aug, 2017 13 commits
    • mm/mmu_notifier: kill invalidate_page · 5f32b265
      Jérôme Glisse authored
       The invalidate_page callback suffered from two pitfalls. First, it
       used to happen after the page table lock was released, and thus a new
       page might have been set up before the call to invalidate_page()
       happened.
      
       This is in a weird way fixed by commit c7ab0d2f ("mm: convert
       try_to_unmap_one() to use page_vma_mapped_walk()"), which moved the
       callback under the page table lock; but this also broke several
       existing users of the mmu_notifier API that assumed they could sleep
       inside this callback.
      
       The second pitfall was that invalidate_page() was the only callback
       that did not take a range of addresses for invalidation, but was given
       an address and a page instead. Lots of the callback implementers
       assumed this could never be THP and thus failed to invalidate the
       appropriate range for THP.
      
      By killing this callback we unify the mmu_notifier callback API to
      always take a virtual address range as input.
      
       Finally, this also simplifies the end user's life, as there are now
       two clear choices:
         - the invalidate_range_start()/end() callbacks (which allow you to
           sleep)
         - invalidate_range(), where you cannot sleep, but which happens
           right after the page table update, under the page table lock
       Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
      Cc: Bernhard Held <berny156@gmx.de>
      Cc: Adam Borowski <kilobyte@angband.pl>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Wanpeng Li <kernellwp@gmail.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Takashi Iwai <tiwai@suse.de>
      Cc: Nadav Amit <nadav.amit@gmail.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: axie <axie@amd.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • KVM: update to new mmu_notifier semantic v2 · fb1522e0
      Jérôme Glisse authored
       Calls to mmu_notifier_invalidate_page() were replaced by calls to
       mmu_notifier_invalidate_range() and are now bracketed by calls to
       mmu_notifier_invalidate_range_start()/end().

       Remove the now-useless invalidate_page callback.
      
       Changed since v1 (Linus Torvalds):
          - remove now useless kvm_arch_mmu_notifier_invalidate_page()
       Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
       Tested-by: Mike Galbraith <efault@gmx.de>
       Tested-by: Adam Borowski <kilobyte@angband.pl>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: kvm@vger.kernel.org
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • xen/gntdev: update to new mmu_notifier semantic · a81461b0
      Jérôme Glisse authored
       Calls to mmu_notifier_invalidate_page() were replaced by calls to
       mmu_notifier_invalidate_range() and are now bracketed by calls to
       mmu_notifier_invalidate_range_start()/end().

       Remove the now-useless invalidate_page callback.
       Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
       Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Roger Pau Monné <roger.pau@citrix.com>
      Cc: xen-devel@lists.xenproject.org (moderated for non-subscribers)
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • sgi-gru: update to new mmu_notifier semantic · a4870125
      Jérôme Glisse authored
       Calls to mmu_notifier_invalidate_page() were replaced by calls to
       mmu_notifier_invalidate_range() and are now bracketed by calls to
       mmu_notifier_invalidate_range_start()/end().

       Remove the now-useless invalidate_page callback.
       Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
      Cc: Dimitri Sivanich <sivanich@sgi.com>
      Cc: Jack Steiner <steiner@sgi.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • misc/mic/scif: update to new mmu_notifier semantic · 192e8564
      Jérôme Glisse authored
       Calls to mmu_notifier_invalidate_page() were replaced by calls to
       mmu_notifier_invalidate_range() and are now bracketed by calls to
       mmu_notifier_invalidate_range_start()/end().

       Remove the now-useless invalidate_page callback.
       Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
      Cc: Sudeep Dutt <sudeep.dutt@intel.com>
      Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • iommu/intel: update to new mmu_notifier semantic · 30ef7d2c
      Jérôme Glisse authored
       Calls to mmu_notifier_invalidate_page() were replaced by calls to
       mmu_notifier_invalidate_range() and are now bracketed by calls to
       mmu_notifier_invalidate_range_start()/end().

       Remove the now-useless invalidate_page callback.
       Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: iommu@lists.linux-foundation.org
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • iommu/amd: update to new mmu_notifier semantic · f0d1c713
      Jérôme Glisse authored
       Calls to mmu_notifier_invalidate_page() were replaced by calls to
       mmu_notifier_invalidate_range() and are now bracketed by calls to
       mmu_notifier_invalidate_range_start()/end().

       Remove the now-useless invalidate_page callback.
       Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
      Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
      Cc: iommu@lists.linux-foundation.org
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • IB/hfi1: update to new mmu_notifier semantic · 7def96f0
      Jérôme Glisse authored
       Calls to mmu_notifier_invalidate_page() were replaced by calls to
       mmu_notifier_invalidate_range() and are now bracketed by calls to
       mmu_notifier_invalidate_range_start()/end().

       Remove the now-useless invalidate_page callback.
       Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
      Cc: linux-rdma@vger.kernel.org
      Cc: Dean Luick <dean.luick@intel.com>
      Cc: Ira Weiny <ira.weiny@intel.com>
      Cc: Doug Ledford <dledford@redhat.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • IB/umem: update to new mmu_notifier semantic · b1a89257
      Jérôme Glisse authored
       Calls to mmu_notifier_invalidate_page() were replaced by calls to
       mmu_notifier_invalidate_range() and are now bracketed by calls to
       mmu_notifier_invalidate_range_start()/end().

       Remove the now-useless invalidate_page callback.
       Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
       Tested-by: Leon Romanovsky <leonro@mellanox.com>
      Cc: linux-rdma@vger.kernel.org
      Cc: Artemy Kovalyov <artemyko@mellanox.com>
      Cc: Doug Ledford <dledford@redhat.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • drm/amdgpu: update to new mmu_notifier semantic · c90270a9
      Jérôme Glisse authored
       Calls to mmu_notifier_invalidate_page() were replaced by calls to
       mmu_notifier_invalidate_range() and are now bracketed by calls to
       mmu_notifier_invalidate_range_start()/end().

       Remove the now-useless invalidate_page callback.
       Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
       Reviewed-by: Christian König <christian.koenig@amd.com>
      Cc: amd-gfx@lists.freedesktop.org
      Cc: Felix Kuehling <Felix.Kuehling@amd.com>
      Cc: Christian König <christian.koenig@amd.com>
      Cc: Alex Deucher <alexander.deucher@amd.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • powerpc/powernv: update to new mmu_notifier semantic · d1d5762e
      Jérôme Glisse authored
       Calls to mmu_notifier_invalidate_page() were replaced by calls to
       mmu_notifier_invalidate_range() and are now bracketed by calls to
       mmu_notifier_invalidate_range_start()/end().

       Remove the now-useless invalidate_page callback.
       Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: Alistair Popple <alistair@popple.id.au>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/rmap: update to new mmu_notifier semantic v2 · 369ea824
      Jérôme Glisse authored
       Replace all mmu_notifier_invalidate_page() calls by *_invalidate_range()
       and make sure they are bracketed by calls to
       *_invalidate_range_start()/end().

       Note that because we cannot presume the pmd value or pte value, we
       have to assume the worst and unconditionally report an invalidation
       as happening.
      
      Changed since v2:
        - try_to_unmap_one() only one call to mmu_notifier_invalidate_range()
         - compute end with PAGE_SIZE << compound_order(page) (sketched below)
        - fix PageHuge() case in try_to_unmap_one()
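
       [ Schematically, that end computation covers the whole compound
         page: ]

          unsigned long end = address + (PAGE_SIZE << compound_order(page));

          mmu_notifier_invalidate_range_start(mm, address, end);
          /* ... walk and clear the page table entries mapping the page ... */
          mmu_notifier_invalidate_range_end(mm, address, end);
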
       Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
       Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Bernhard Held <berny156@gmx.de>
      Cc: Adam Borowski <kilobyte@angband.pl>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Wanpeng Li <kernellwp@gmail.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Takashi Iwai <tiwai@suse.de>
      Cc: Nadav Amit <nadav.amit@gmail.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: axie <axie@amd.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • dax: update to new mmu_notifier semantic · a4d1a885
      Jérôme Glisse authored
       Replace all mmu_notifier_invalidate_page() calls by *_invalidate_range()
       and make sure they are bracketed by calls to
       *_invalidate_range_start()/end().

       Note that because we cannot presume the pmd value or pte value, we
       have to assume the worst and unconditionally report an invalidation
       as happening.
       Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Bernhard Held <berny156@gmx.de>
      Cc: Adam Borowski <kilobyte@angband.pl>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Wanpeng Li <kernellwp@gmail.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Takashi Iwai <tiwai@suse.de>
      Cc: Nadav Amit <nadav.amit@gmail.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: axie <axie@amd.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 30 Aug, 2017 6 commits
  4. 29 Aug, 2017 15 commits
  5. 28 Aug, 2017 4 commits
    • page waitqueue: always add new entries at the end · 9c3a815f
      Linus Torvalds authored
      Commit 3510ca20 ("Minor page waitqueue cleanups") made the page
      queue code always add new waiters to the back of the queue, which helps
      upcoming patches to batch the wakeups for some horrid loads where the
      wait queues grow to thousands of entries.
      
       However, I forgot about the nasty add_page_wait_queue() special case
       code that is only used by the cachefiles code. That one still
       continued to add the new wait queue entries at the beginning of the
       list.
      
      Fix it, because any sane batched wakeup will require that we don't
      suddenly start getting new entries at the beginning of the list that we
      already handled in a previous batch.
      
      [ The current code always does the whole list while holding the lock, so
        wait queue ordering doesn't matter for correctness, but even then it's
        better to add later entries at the end from a fairness standpoint ]
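
       [ The fix itself is a one-liner: the cachefiles-only helper switches
         from head to tail insertion. Roughly: ]

          void add_page_wait_queue(struct page *page, wait_queue_entry_t *waiter)
          {
                  wait_queue_head_t *q = page_waitqueue(page);
                  unsigned long flags;

                  spin_lock_irqsave(&q->lock, flags);
                  __add_wait_queue_entry_tail(q, waiter); /* was __add_wait_queue() */
                  SetPageWaiters(page);
                  spin_unlock_irqrestore(&q->lock, flags);
          }
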
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • cpumask: fix spurious cpumask_of_node() on non-NUMA multi-node configs · b339752d
      Tejun Heo authored
       When !NUMA, cpumask_of_node(@node) equals cpu_online_mask regardless
       of @node. The assumption seems to be that if !NUMA, there shouldn't be
       more than one node, and thus reporting cpu_online_mask regardless of
       @node is correct. However, that assumption was broken years ago to
       support DISCONTIGMEM, and whether a system has multiple nodes or not
       is separately controlled by NEED_MULTIPLE_NODES.
      
      This means that, on a system with !NUMA && NEED_MULTIPLE_NODES,
      cpumask_of_node() will report cpu_online_mask for all possible nodes,
      indicating that the CPUs are associated with multiple nodes which is an
      impossible configuration.
      
      This bug has been around forever but doesn't look like it has caused any
      noticeable symptoms.  However, it triggers a WARN recently added to
      workqueue to verify NUMA affinity configuration.
      
      Fix it by reporting empty cpumask on non-zero nodes if !NUMA.
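
       [ The fix is a one-liner in the generic topology fallback; roughly: ]

          /* include/asm-generic/topology.h, the !NUMA fallback */
          #ifndef cpumask_of_node
          #define cpumask_of_node(node) \
                  ((node) == 0 ? cpu_online_mask : cpu_none_mask)
          #endif
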
       Signed-off-by: Tejun Heo <tj@kernel.org>
       Reported-and-tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: stable@vger.kernel.org
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • ARCv2: SMP: Mask only private-per-core IRQ lines on boot at core intc · e8206d2b
      Alexey Brodkin authored
       Recent commit a8ec3ee8 ("arc: Mask individual IRQ lines during core
       INTC init") breaks interrupt handling on ARCv2 SMP systems.
      
       That commit masked all interrupts at onset, as some controllers on
       some boards (customer as well as internal) would assert interrupts
       early, before any handlers were installed. For SMP systems, the
       masking was done at each cpu's core-intc. Later, when the IRQ was
       actually requested, it was unmasked, but only on the requesting cpu.
      
      For "common" interrupts, which were wired up from the 2nd level IDU
      intc, this was as issue as they needed to be enabled on ALL the cpus
      (given that IDU IRQs are by default served Round Robin across cpus)
      
       So fix that by NOT masking "common" interrupts at the core-intc, but
       instead at the 2nd level IDU intc (the latter already being done in
       idu_of_init()).
      
      Fixes: a8ec3ee8 ("arc: Mask individual IRQ lines during core INTC init")
       Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
      [vgupta: reworked changelog, removed the extraneous idu_irq_mask_raw()]
       Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • fs/select: Fix memory corruption in compat_get_fd_set() · 79de3cbe
      Helge Deller authored
      Commit 464d6242 ("select: switch compat_{get,put}_fd_set() to
      compat_{get,put}_bitmap()") changed the calculation on how many bytes
      need to be zeroed when userspace handed over a NULL pointer for a fdset
      array in the select syscall.
      
      The calculation was changed in compat_get_fd_set() wrongly from
      	memset(fdset, 0, ((nr + 1) & ~1)*sizeof(compat_ulong_t));
      to
      	memset(fdset, 0, ALIGN(nr, BITS_PER_LONG));
      
      The ALIGN(nr, BITS_PER_LONG) calculates the number of _bits_ which need
      to be zeroed in the target fdset array (rounded up to the next full bits
      for an unsigned long).
      
      But the memset() call expects the number of _bytes_ to be zeroed.
      
       This leads to clearing more memory than wanted (on the stack area or
       even at kmalloc()ed memory areas) and to random kernel crashes as we
       have seen on the parisc platform.
      
      The correct change should have been
      
      	memset(fdset, 0, (ALIGN(nr, BITS_PER_LONG) / BITS_PER_LONG) * BYTES_PER_LONG);
      
       which is the same as can be achieved with a call to
      
      	zero_fd_set(nr, fdset).
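
       [ A quick worked example of the overrun, for nr = 100 descriptors on
         a 64-bit kernel (BITS_PER_LONG = 64, BYTES_PER_LONG = 8); the buggy
         call zeroes eight times as many bytes as the fdset occupies: ]

          buggy:   memset(fdset, 0, ALIGN(100, 64))                  /* 128 bytes */
          correct: (ALIGN(100, 64) / BITS_PER_LONG) * BYTES_PER_LONG /*  16 bytes */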
      
       Fixes: 464d6242 ("select: switch compat_{get,put}_fd_set() to compat_{get,put}_bitmap()")
       Acked-by: Al Viro <viro@zeniv.linux.org.uk>
       Signed-off-by: Helge Deller <deller@gmx.de>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>