20 Sep, 2017 · 40 commits
    • xfs: check _btree_check_block value · a6247b01
      Darrick J. Wong authored
      commit 1e86eabe upstream.
      
      Check the _btree_check_block return value for the firstrec and lastrec
      functions, since we have the ability to signal that the repositioning
      did not succeed.
      
      Fixes-coverity-id: 114067
      Fixes-coverity-id: 114068
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      a6247b01
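
      A minimal sketch of the check this entry describes, assuming the usual shape
      of xfs_btree_firstrec()/xfs_btree_lastrec() (variable names are illustrative,
      not the exact upstream diff):

        /* Propagate a failed block check instead of silently ignoring it. */
        block = xfs_btree_get_block(cur, level, &bp);
        if (xfs_btree_check_block(cur, block, level, bp))
                return 0;       /* repositioning did not succeed */
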
    • xfs: don't crash on unexpected holes in dir/attr btrees · e76496fa
      Darrick J. Wong authored
      commit cd87d867 upstream.
      
      In quite a few places we call xfs_da_read_buf with a mappedbno that we
      don't control, then assume that the function passes back either an error
      code or a buffer pointer.  Unfortunately, if mappedbno == -2 and bno
      maps to a hole, we get a return code of zero and a NULL buffer, which
      means that we crash if we actually try to use that buffer pointer.  This
      happens immediately when we set the buffer type for transaction context.
      
      Therefore, check that we have no error code and a non-NULL bp before
      trying to use bp.  This patch is a follow-up to an incomplete fix in
      96a3aefb ("xfs: don't crash if reading a directory results in an
      unexpected hole").
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      e76496fa
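
      The guard this entry describes looks roughly like the following sketch,
      reusing the error/bp names from the text above (illustrative only):

        if (error)
                return error;
        if (!bp)        /* mappedbno == -2 mapped a hole: no error, no buffer */
                return -EFSCORRUPTED;
        /* only now is it safe to touch bp, e.g. to set the buffer type */
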
    • xfs: free cowblocks and retry on buffered write ENOSPC · b46382f0
      Brian Foster authored
      commit cf2cb784 upstream.
      
      XFS runs an eofblocks reclaim scan before returning an ENOSPC error to
      userspace for buffered writes. This facilitates aggressive speculative
      preallocation without causing user visible side effects such as
      premature ENOSPC.
      
      Run a cowblocks scan in the same situation to reclaim lingering COW fork
      preallocation throughout the filesystem.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      b46382f0
    • xfs: free uncommitted transactions during log recovery · 171192c9
      Brian Foster authored
      commit 39775431 upstream.
      
      Log recovery allocates in-core transaction and member item data
      structures on-demand as it processes the on-disk log. Transactions
      are allocated on first encounter on-disk and stored in a hash table
      structure where they are easily accessible for subsequent lookups.
      Transaction items are also allocated on demand and are attached to
      the associated transactions.
      
      When a commit record is encountered in the log, the transaction is
      committed to the fs and the in-core structures are freed. If a
      filesystem crashes or shuts down before all in-core log buffers are
      flushed to the log, however, not all transactions may have commit
      records in the log. As expected, the modifications in such an
      incomplete transaction are not replayed to the fs. The in-core data
      structures for the partial transaction are never freed, however,
      resulting in a memory leak.
      
      Update xlog_do_recovery_pass() to first correctly initialize the
      hash table array so empty lists can be distinguished from populated
      lists on function exit. Update xlog_recover_free_trans() to always
      remove the transaction from the list prior to freeing the associated
      memory. Finally, walk the hash table of transaction lists as the
      last step before it goes out of scope and free any transactions that
      may remain on the lists. This prevents a memory leak of partial
      transactions in the log.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      171192c9
    • xfs: don't allow bmap on rt files · 621d0b75
      Darrick J. Wong authored
      commit 61d819e7 upstream.
      
      bmap returns a dumb LBA address but not the block device that goes with
      that LBA.  Swapfiles don't care about this and will blindly assume that
      the data volume is the correct blockdev, which is totally bogus for
      files on the rt subvolume.  This results in the swap code doing IOs to
      arbitrary locations on the data device(!) if the passed in mapping is a
      realtime file, so just turn off bmap for rt files.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      
      621d0b75
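
      A minimal sketch of the kind of guard described above, placed in
      xfs_vm_bmap(); XFS_IS_REALTIME_INODE() is the existing predicate for
      realtime inodes, and the variable name ip is assumed here:

        /* bmap can't name the rt device, so don't hand out rt block numbers. */
        if (XFS_IS_REALTIME_INODE(ip))
                return 0;
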
    • xfs: remove bli from AIL before release on transaction abort · 8913492d
      Brian Foster authored
      commit 3d4b4a3e upstream.
      
      When a buffer is modified, logged and committed, it ultimately ends
      up sitting on the AIL with a dirty bli waiting for metadata
      writeback. If another transaction locks and invalidates the buffer
      (freeing an inode chunk, for example) in the meantime, the bli is
      flagged as stale, the dirty state is cleared and the bli remains in
      the AIL.
      
      If a shutdown occurs before the transaction that has invalidated the
      buffer is committed, the transaction is ultimately aborted. The log
      items are flagged as such and ->iop_unlock() handles the aborted
      items. Because the bli is clean (due to the invalidation),
      ->iop_unlock() unconditionally releases it. The log item may still
      reside in the AIL, however, which means the I/O completion handler
      may still run and attempt to access it. This results in assert
      failure due to the release of the bli while still present in the AIL
      and a subsequent NULL dereference and panic in the buffer I/O
      completion handling. This can be reproduced by running generic/388
      in repetition.
      
      To avoid this problem, update xfs_buf_item_unlock() to first check
      whether the bli is aborted and if so, remove it from the AIL before
      it is released. This ensures that the bli is no longer accessed
      during the shutdown sequence after it has been freed.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      8913492d
    • xfs: release bli from transaction properly on fs shutdown · 6c0ecde2
      Brian Foster authored
      commit 79e641ce upstream.
      
      If a filesystem shutdown occurs with a buffer log item in the CIL
      and a log force occurs, the ->iop_unpin() handler is generally
      expected to tear down the bli properly. This entails freeing the bli
      memory and releasing the associated hold on the buffer so it can be
      released and the filesystem unmounted.
      
      If this sequence occurs while ->bli_refcount is elevated (i.e.,
      another transaction is open and attempting to modify the buffer),
      however, ->iop_unpin() may not be responsible for releasing the bli.
      Instead, the transaction may release the final ->bli_refcount
      reference and thus xfs_trans_brelse() is responsible for tearing
      down the bli.
      
      While xfs_trans_brelse() does drop the reference count, it only
      attempts to release the bli if it is clean (i.e., not in the
      CIL/AIL). If the filesystem is shutdown and the bli is sitting dirty
      in the CIL as noted above, this ends up skipping the last
      opportunity to release the bli. In turn, this leaves the hold on the
      buffer and causes an unmount hang. This can be reproduced by running
      generic/388 in repetition.
      
      Update xfs_trans_brelse() to handle this shutdown corner case
      correctly. If the final bli reference is dropped and the filesystem
      is shutdown, remove the bli from the AIL (if necessary) and release
      the bli to drop the buffer hold and ensure an unmount does not hang.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      6c0ecde2
    • xfs: try to avoid blowing out the transaction reservation when bunmaping a shared extent · ce83e494
      Darrick J. Wong authored
      commit e1a4e37c upstream.
      
      In a pathological scenario where we are trying to bunmapi a single
      extent in which every other block is shared, it's possible that trying
      to unmap the entire large extent in a single transaction can generate so
      many EFIs that we overflow the transaction reservation.
      
      Therefore, use a heuristic to guess at the number of blocks we can
      safely unmap from a reflink file's data fork in a single transaction.
      This should prevent problems such as the log head slamming into the tail
      and ASSERTs that trigger because we've exceeded the transaction
      reservation.
      
      Note that since bunmapi can fail to unmap the entire range, we must also
      teach the deferred unmap code to roll into a new transaction whenever we
      get low on reservation.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      [hch: random edits, all bugs are my fault]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      ce83e494
    • xfs: push buffer of flush locked dquot to avoid quotacheck deadlock · 7cb011bb
      Brian Foster authored
      commit 7912e7fe upstream.
      
      Reclaim during quotacheck can lead to deadlocks on the dquot flush
      lock:
      
       - Quotacheck populates a local delwri queue with the physical dquot
         buffers.
       - Quotacheck performs the xfs_qm_dqusage_adjust() bulkstat and
         dirties all of the dquots.
       - Reclaim kicks in and attempts to flush a dquot whose buffer is
          already queued on the quotacheck queue. The flush succeeds but
         queueing to the reclaim delwri queue fails as the backing buffer is
         already queued. The flush unlock is now deferred to I/O completion
         of the buffer from the quotacheck queue.
       - The dqadjust bulkstat continues and dirties the recently flushed
         dquot once again.
       - Quotacheck proceeds to the xfs_qm_flush_one() walk which requires
         the flush lock to update the backing buffers with the in-core
         recalculated values. It deadlocks on the redirtied dquot as the
         flush lock was already acquired by reclaim, but the buffer resides
         on the local delwri queue which isn't submitted until the end of
         quotacheck.
      
      This is reproduced by running quotacheck on a filesystem with a
      couple million inodes in low memory (512MB-1GB) situations. This is
      a regression as of commit 43ff2122 ("xfs: on-stack delayed write
      buffer lists"), which removed a trylock and buffer I/O submission
      from the quotacheck dquot flush sequence.
      
      Quotacheck first resets and collects the physical dquot buffers in a
      delwri queue. Then, it traverses the filesystem inodes via bulkstat,
      updates the in-core dquots, flushes the corrected dquots to the
      backing buffers and finally submits the delwri queue for I/O. Since
      the backing buffers are queued across the entire quotacheck
      operation, dquot reclaim cannot possibly complete a dquot flush
      before quotacheck completes.
      
      Therefore, quotacheck must submit the buffer for I/O in order to
      cycle the flush lock and flush the dirty in-core dquot to the
      buffer. Add a delwri queue buffer push mechanism to submit an
      individual buffer for I/O without losing the delwri queue status and
      use it from quotacheck to avoid the deadlock. This restores
      quotacheck behavior to as before the regression was introduced.
      Reported-by: Martin Svec <martin.svec@zoner.cz>
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      7cb011bb
    • xfs: fix spurious spin_is_locked() assert failures on non-smp kernels · 85ab1b23
      Brian Foster authored
      commit 95989c46 upstream.
      
      The 0-day kernel test robot reports assertion failures on
      !CONFIG_SMP kernels due to failed spin_is_locked() checks. As it
      turns out, spin_is_locked() is hardcoded to return zero on
      !CONFIG_SMP kernels and so this function cannot be relied on to
      verify spinlock state in this configuration.
      
      To avoid this problem, replace the associated asserts with lockdep
      variants that do the right thing regardless of kernel configuration.
      Drop the one assert that checks for an unlocked lock as there is no
      suitable lockdep variant for that case. This moves the spinlock
      checks from XFS debug code to lockdep, but generally provides the
      same level of protection.
      Reported-by: kbuild test robot <fengguang.wu@intel.com>
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      85ab1b23
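
      The swap described above looks roughly like this sketch (the specific lock
      used here is illustrative):

        /* Fails spuriously on !CONFIG_SMP, where spin_is_locked() is always 0: */
        ASSERT(spin_is_locked(&ip->i_flags_lock));

        /* Lockdep-based check that behaves correctly in any configuration: */
        lockdep_assert_held(&ip->i_flags_lock);
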
    • xfs: Move handling of missing page into one place in xfs_find_get_desired_pgoff() · 4c1d33c4
      Jan Kara authored
      commit a54fba8f upstream.
      
      Currently several places in xfs_find_get_desired_pgoff() handle the case
      of a missing page. Handle them all in one place after the loop has
      terminated.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      4c1d33c4
    • x86/switch_to/64: Rewrite FS/GS switching yet again to fix AMD CPUs · 3fddeb80
      Andy Lutomirski authored
      commit e137a4d8 upstream.
      
      Switching FS and GS is a mess, and the current code is still subtly
      wrong: it assumes that "Loading a nonzero value into FS sets the
      index and base", which is false on AMD CPUs if the value being
      loaded is 1, 2, or 3.
      
      (The current code came from commit 3e2b68d7 ("x86/asm,
      sched/x86: Rewrite the FS and GS context switch code"), which made
      it better but didn't fully fix it.)
      
      Rewrite it to be much simpler and more obviously correct.  This
      should fix it fully on AMD CPUs and shouldn't adversely affect
      performance.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Chang Seok <chang.seok.bae@intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      3fddeb80
    • x86/fsgsbase/64: Report FSBASE and GSBASE correctly in core dumps · 0caec706
      Andy Lutomirski authored
      commit 9584d98b upstream.
      
      In ELF_COPY_CORE_REGS, we're copying from the current task, so
      accessing thread.fsbase and thread.gsbase makes no sense.  Just read
      the values from the CPU registers.
      
      In practice, the old code would have been correct most of the time
      simply because thread.fsbase and thread.gsbase usually matched the
      CPU registers.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Chang Seok <chang.seok.bae@intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      0caec706
    • x86/fsgsbase/64: Fully initialize FS and GS state in start_thread_common · c7d1ddec
      Andy Lutomirski authored
      commit 767d035d upstream.
      
      execve used to leak FSBASE and GSBASE on AMD CPUs.  Fix it.
      
      The security impact of this bug is small but not quite zero -- it
      could weaken ASLR when a privileged task execs a less privileged
      program, but only if program changed bitness across the exec, or the
      child binary was highly unusual or actively malicious.  A child
      program that was compromised after the exec would not have access to
      the leaked base.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Chang Seok <chang.seok.bae@intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      c7d1ddec
    • f2fs: check hot_data for roll-forward recovery · cc9618c9
      Jaegeuk Kim authored
      commit 125c9fb1 upstream.
      
      We need to check HOT_DATA to truncate any previous data block when doing
      roll-forward recovery.
      Reviewed-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      cc9618c9
    • f2fs: let fill_super handle roll-forward errors · 0f90297c
      Jaegeuk Kim authored
      commit afd2b4da upstream.
      
      If we set CP_ERROR_FLAG on a roll-forward error, f2fs can no longer proceed
      with any I/O due to f2fs_cp_error(). But, for example, if some stale data is
      involved in the roll-forward process, we can get -ENOENT and the fs gets stuck.
      If we get any error, let fill_super set SBI_NEED_FSCK and try to recover back
      to a stable point.
      Reviewed-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      0f90297c
    • ip_tunnel: fix setting ttl and tos value in collect_md mode · 60b94125
      Haishuang Yan authored
      
      [ Upstream commit 0f693f19 ]
      
      ttl and tos variables are declared and assigned, but are not used in
      iptunnel_xmit() function.
      
      Fixes: cfc7381b ("ip_tunnel: add collect_md mode to IPIP tunnel")
      Cc: Alexei Starovoitov <ast@fb.com>
      Signed-off-by: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      60b94125
    • sctp: fix missing wake ups in some situations · 3f60dadb
      Marcelo Ricardo Leitner authored
      
      [ Upstream commit 7906b00f ]
      
      Commit fb586f25 ("sctp: delay calls to sk_data_ready() as much as
      possible") minimized the number of wake ups that are triggered in case
      the association receives a packet with multiple data chunks on it and/or
      when io_events are enabled and then commit 0970f5b3 ("sctp: signal
      sk_data_ready earlier on data chunks reception") moved the wake up to as
      soon as possible. It thus relies on the state machine running later to
      clean the flag that the event was already generated.
      
      The issue is that there are 2 call paths that call
      sctp_ulpq_tail_event() outside of the state machine, causing the flag to
      linger and possibly omitting a needed wake up in the sequence.
      
      One of the call paths is when enabling SCTP_SENDER_DRY_EVENTS via
      setsockopt(SCTP_EVENTS), as noticed by Harald Welte. The other is when
      partial reliability triggers removal of chunks from the send queue when
      the application calls sendmsg().
      
      This commit fixes it by not setting the flag in case the socket is not
      owned by the user, as it won't be cleaned later. This works for
      user-initiated calls and also for rx path processing.
      
      Fixes: fb586f25 ("sctp: delay calls to sk_data_ready() as much as possible")
      Reported-by: Harald Welte <laforge@gnumonks.org>
      Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      3f60dadb
    • ipv6: fix typo in fib6_net_exit() · bf8ed95d
      Eric Dumazet authored
      
      [ Upstream commit 32a805ba ]
      
      IPv6 FIB should use FIB6_TABLE_HASHSZ, not FIB_TABLE_HASHSZ.
      
      Fixes: ba1cc08d ("ipv6: fix memory leak with multiple tables during netns destruction")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      bf8ed95d
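
      The fix amounts to sizing the cleanup loop in fib6_net_exit() with the IPv6
      constant, roughly as in this sketch:

        for (i = 0; i < FIB6_TABLE_HASHSZ; i++) {
                /* walk net->ipv6.fib_table_hash[i] and free every fib6_table */
        }
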
    • ipv6: fix memory leak with multiple tables during netns destruction · c9335db7
      Sabrina Dubroca authored
      
      [ Upstream commit ba1cc08d ]
      
      fib6_net_exit only frees the main and local tables. If another table was
      created with fib6_alloc_table, we leak it when the netns is destroyed.
      
      Fix this in the same way ip_fib_net_exit cleans up tables, by walking
      through the whole hashtable of fib6_table's. We can get rid of the
      special cases for local and main, since they're also part of the
      hashtable.
      
      Reproducer:
          ip netns add x
          ip -net x -6 rule add from 6003:1::/64 table 100
          ip netns del x
      Reported-by: Jianlin Shi <jishi@redhat.com>
      Fixes: 58f09b78 ("[NETNS][IPV6] ip6_fib - make it per network namespace")
      Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      c9335db7
    • ip6_gre: update mtu properly in ip6gre_err · ca7d8a33
      Xin Long authored
      
      [ Upstream commit 5c25f30c ]
      
      Now when processing ICMPV6_PKT_TOOBIG, ip6gre_err only subtracts the
      offset of the gre header from the mtu info. The expected mtu of the gre
      device should also subtract the gre header. Otherwise, the next packets
      still can't be sent out.
      
      Jianlin found this issue when using the topo:
        client(ip6gre)<---->(nic1)route(nic2)<----->(ip6gre)server
      
      and reducing nic2's mtu, after which both tcp and sctp performance with
      large data sizes dropped to 0.
      
      This patch fixes it by also subtracting the grehdr (tun->tun_hlen)
      from the mtu info when updating the gre device's mtu in ip6gre_err(). It
      also needs to subtract ETH_HLEN if the gre dev's type is ARPHRD_ETHER.
      Reported-by: Jianlin Shi <jishi@redhat.com>
      Signed-off-by: Xin Long <lucien.xin@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      ca7d8a33
    • vhost_net: correctly check tx avail during rx busy polling · f5755c0e
      Jason Wang authored
      
      [ Upstream commit 8b949bef ]
      
      In the past we checked tx avail through vhost_enable_notify(), which is
      wrong since it only checks whether the guest has filled more available
      buffers since the last avail idx synchronization, which was just done by
      vhost_vq_avail_empty() beforehand. What we really want is to check for
      pending buffers in the avail ring. Fix this by calling
      vhost_vq_avail_empty() instead.
      
      This issue could be noticed by doing netperf TCP_RR benchmark as
      client from guest (but not host). With this fix, TCP_RR from guest to
      localhost restores from 1375.91 trans per sec to 55235.28 trans per
      sec on my laptop (Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz).
      
      Fixes: 03088137 ("vhost_net: basic polling support")
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      f5755c0e
    • gianfar: Fix Tx flow control deactivation · 90406e68
      Claudiu Manoil authored
      
      [ Upstream commit 5d621672 ]
      
      The wrong register is checked for the Tx flow control bit,
      it should have been maccfg1 not maccfg2.
      This went unnoticed for so long probably because the impact is
      hardly visible, not to mention the tangled code from adjust_link().
      First, link flow control (i.e. handling of Rx/Tx link level pause frames)
      is disabled by default (needs to be enabled via 'ethtool -A').
      Secondly, maccfg2 always returns 0 for tx_flow_oldval (except for a few
      old boards), which results in Tx flow control remaining always on
      once activated.
      
      Fixes: 45b679c9 ("gianfar: Implement PAUSE frame generation support")
      Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      90406e68
    • Revert "net: fix percpu memory leaks" · 1bcf1871
      Jesper Dangaard Brouer authored
      
      [ Upstream commit 5a63643e ]
      
      This reverts commit 1d6119ba.
      
      After reverting commit 6d7b857d ("net: use lib/percpu_counter API
      for fragmentation mem accounting") there is no need for this
      fix-up patch.  As percpu_counter is no longer used, it can no
      longer leak that memory.
      
      Fixes: 6d7b857d ("net: use lib/percpu_counter API for fragmentation mem accounting")
      Fixes: 1d6119ba ("net: fix percpu memory leaks")
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      1bcf1871
    • Revert "net: use lib/percpu_counter API for fragmentation mem accounting" · 5a7a40ba
      Jesper Dangaard Brouer authored
      
      [ Upstream commit fb452a1a ]
      
      This reverts commit 6d7b857d.
      
      There is a bug in fragmentation codes use of the percpu_counter API,
      that can cause issues on systems with many CPUs.
      
      The frag_mem_limit() just reads the global counter (fbc->count),
      without considering that other CPUs can each hold up to the batch size
      (130K) that hasn't been subtracted yet.  Due to the 3 MBytes lower thresh
      limit, this becomes dangerous at >=24 CPUs (3*1024*1024/130000=24).
      
      The correct API usage would be to use __percpu_counter_compare() which
      does the right thing, and takes into account the number of (online)
      CPUs and batch size, to account for this and call __percpu_counter_sum()
      when needed.
      
      We choose to revert the use of the lib/percpu_counter API for frag
      memory accounting for several reasons:
      
      1) On systems with CPUs > 24, the heavier fully locked
         __percpu_counter_sum() is always invoked, which will be more
         expensive than the atomic_t that is reverted to.
      
      Given that systems with more than 24 CPUs are becoming common, this
      doesn't seem like a good option.  To mitigate this, the batch size could
      be decreased and the thresh increased.
      
      2) The add_frag_mem_limit+sub_frag_mem_limit pairs happen on the RX
         CPU, before SKBs are pushed into sockets on remote CPUs.  Given
         NICs can only hash on L2 part of the IP-header, the NIC-RXq's will
         likely be limited.  Thus, a fair chance that atomic add+dec happen
         on the same CPU.
      
      Revert note: commit 1d6119ba ("net: fix percpu memory leaks")
      removed init_frag_mem_limit() and instead used inet_frags_init_net().
      After this revert, inet_frags_uninit_net() becomes empty.
      
      Fixes: 6d7b857d ("net: use lib/percpu_counter API for fragmentation mem accounting")
      Fixes: 1d6119ba ("net: fix percpu memory leaks")
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Acked-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      5a7a40ba
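
      A sketch of the API point made above; the field and variable names
      (nf->mem, thresh) are assumptions used for illustration:

        /* frag_mem_limit() only reads the possibly-stale global count: */
        s64 mem = percpu_counter_read(&nf->mem);

        /* A correct check would fold in the per-CPU batches, e.g.: */
        if (__percpu_counter_compare(&nf->mem, thresh, percpu_counter_batch) > 0)
                /* over the threshold */;
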
    • bridge: switchdev: Clear forward mark when transmitting packet · b5a3ae8b
      Ido Schimmel authored
      
      [ Upstream commit 79e99bdd ]
      
      Commit 6bc506b4 ("bridge: switchdev: Add forward mark support for
      stacked devices") added the 'offload_fwd_mark' bit to the skb in order
      to allow drivers to indicate to the bridge driver that they already
      forwarded the packet in L2.
      
      In case the bit is set, before transmitting the packet from each port,
      the port's mark is compared with the mark stored in the skb's control
      block. If both marks are equal, we know the packet arrived from a switch
      device that already forwarded the packet and it's not re-transmitted.
      
      However, if the packet is transmitted from the bridge device itself
      (e.g., br0), we should clear the 'offload_fwd_mark' bit as the mark
      stored in the skb's control block isn't valid.
      
      This scenario can happen in rare cases where a packet was trapped during
      L3 forwarding and forwarded by the kernel to a bridge device.
      
      Fixes: 6bc506b4 ("bridge: switchdev: Add forward mark support for stacked devices")
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Reported-by: Yotam Gigi <yotamg@mellanox.com>
      Tested-by: Yotam Gigi <yotamg@mellanox.com>
      Reviewed-by: Jiri Pirko <jiri@mellanox.com>
      Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      b5a3ae8b
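
      A minimal sketch of the clearing described above, assuming it happens in
      the bridge device's own transmit path (br_dev_xmit()):

        #ifdef CONFIG_NET_SWITCHDEV
                /* The mark in the skb's control block is not valid on this path. */
                skb->offload_fwd_mark = 0;
        #endif
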
    • mlxsw: spectrum: Forbid linking to devices that have uppers · 73ee5a73
      Ido Schimmel authored
      
      [ Upstream commit 25cc72a3 ]
      
      The mlxsw driver relies on NETDEV_CHANGEUPPER events to configure the
      device in case a port is enslaved to a master netdev such as bridge or
      bond.
      
      Since the driver ignores events unrelated to its ports and their
      uppers, it's possible to engineer situations in which the device's data
      path differs from the kernel's.
      
      One example to such a situation is when a port is enslaved to a bond
      that is already enslaved to a bridge. When the bond was enslaved the
      driver ignored the event - as the bond wasn't one of its uppers - and
      therefore a bridge port instance isn't created in the device.
      
      Until such configurations are supported forbid them by checking that the
      upper device doesn't have uppers of its own.
      
      Fixes: 0d65fc13 ("mlxsw: spectrum: Implement LAG port join/leave")
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Reported-by: Nogah Frankel <nogahf@mellanox.com>
      Tested-by: Nogah Frankel <nogahf@mellanox.com>
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      73ee5a73
    • tcp: initialize rcv_mss to TCP_MIN_MSS instead of 0 · a10c5101
      Wei Wang authored
      
      [ Upstream commit 499350a5 ]
      
      When tcp_disconnect() is called, inet_csk_delack_init() sets
      icsk->icsk_ack.rcv_mss to 0.
      This could potentially cause tcp_recvmsg() => tcp_cleanup_rbuf() =>
      __tcp_select_window() call path to have division by 0 issue.
      So this patch initializes rcv_mss to TCP_MIN_MSS instead of 0.
      Reported-by: Andrey Konovalov <andreyknvl@google.com>
      Signed-off-by: Wei Wang <weiwan@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      a10c5101
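
      A sketch of the initialization described above, placed in tcp_disconnect()
      right after the delack state is reset (icsk being the usual inet_csk(sk)):

        inet_csk_delack_init(sk);
        /* avoid a 0 rcv_mss reaching __tcp_select_window() and dividing by 0 */
        icsk->icsk_ack.rcv_mss = TCP_MIN_MSS;
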
    • Revert "net: phy: Correctly process PHY_HALTED in phy_stop_machine()" · a6e51fda
      Florian Fainelli authored
      
      [ Upstream commit ebc8254a ]
      
      This reverts commit 7ad813f2 ("net: phy:
      Correctly process PHY_HALTED in phy_stop_machine()") because it is
      creating the possibility for a NULL pointer dereference.
      
      David Daney provided the following call trace and diagram of events:
      
      When ndo_stop() is called we call:
      
       phy_disconnect()
          +---> phy_stop_interrupts() implies: phydev->irq = PHY_POLL;
          +---> phy_stop_machine()
          |      +---> phy_state_machine()
          |              +----> queue_delayed_work(): Work queued.
          +--->phy_detach() implies: phydev->attached_dev = NULL;
      
      Now at a later time the queued work does:
      
       phy_state_machine()
          +---->netif_carrier_off(phydev->attached_dev): Oh no! It is NULL:
      
       CPU 12 Unable to handle kernel paging request at virtual address
      0000000000000048, epc == ffffffff80de37ec, ra == ffffffff80c7c
      Oops[#1]:
      CPU: 12 PID: 1502 Comm: kworker/12:1 Not tainted 4.9.43-Cavium-Octeon+ #1
      Workqueue: events_power_efficient phy_state_machine
      task: 80000004021ed100 task.stack: 8000000409d70000
      $ 0   : 0000000000000000 ffffffff84720060 0000000000000048 0000000000000004
      $ 4   : 0000000000000000 0000000000000001 0000000000000004 0000000000000000
      $ 8   : 0000000000000000 0000000000000000 00000000ffff98f3 0000000000000000
      $12   : 8000000409d73fe0 0000000000009c00 ffffffff846547c8 000000000000af3b
      $16   : 80000004096bab68 80000004096babd0 0000000000000000 80000004096ba800
      $20   : 0000000000000000 0000000000000000 ffffffff81090000 0000000000000008
      $24   : 0000000000000061 ffffffff808637b0
      $28   : 8000000409d70000 8000000409d73cf0 80000000271bd300 ffffffff80c7804c
      Hi    : 000000000000002a
      Lo    : 000000000000003f
      epc   : ffffffff80de37ec netif_carrier_off+0xc/0x58
      ra    : ffffffff80c7804c phy_state_machine+0x48c/0x4f8
      Status: 14009ce3        KX SX UX KERNEL EXL IE
      Cause : 00800008 (ExcCode 02)
      BadVA : 0000000000000048
      PrId  : 000d9501 (Cavium Octeon III)
      Modules linked in:
      Process kworker/12:1 (pid: 1502, threadinfo=8000000409d70000,
      task=80000004021ed100, tls=0000000000000000)
      Stack : 8000000409a54000 80000004096bab68 80000000271bd300 80000000271c1e00
              0000000000000000 ffffffff808a1708 8000000409a54000 80000000271bd300
              80000000271bd320 8000000409a54030 ffffffff80ff0f00 0000000000000001
              ffffffff81090000 ffffffff808a1ac0 8000000402182080 ffffffff84650000
              8000000402182080 ffffffff84650000 ffffffff80ff0000 8000000409a54000
              ffffffff808a1970 0000000000000000 80000004099e8000 8000000402099240
              0000000000000000 ffffffff808a8598 0000000000000000 8000000408eeeb00
              8000000409a54000 00000000810a1d00 0000000000000000 8000000409d73de8
              8000000409d73de8 0000000000000088 000000000c009c00 8000000409d73e08
              8000000409d73e08 8000000402182080 ffffffff808a84d0 8000000402182080
              ...
      Call Trace:
      [<ffffffff80de37ec>] netif_carrier_off+0xc/0x58
      [<ffffffff80c7804c>] phy_state_machine+0x48c/0x4f8
      [<ffffffff808a1708>] process_one_work+0x158/0x368
      [<ffffffff808a1ac0>] worker_thread+0x150/0x4c0
      [<ffffffff808a8598>] kthread+0xc8/0xe0
      [<ffffffff808617f0>] ret_from_kernel_thread+0x14/0x1c
      
      The original motivation for this change came from Marc Gonzales
      indicating that his network driver did not have its adjust_link callback
      executing with phydev->link = 0 while he was expecting it.
      
      PHYLIB has never made any such guarantee, because phy_stop() merely
      tells the workqueue to move into the PHY_HALTED state, which happens
      asynchronously.
      Reported-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Reported-by: David Daney <ddaney.cavm@gmail.com>
      Fixes: 7ad813f2 ("net: phy: Correctly process PHY_HALTED in phy_stop_machine()")
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      a6e51fda
    • kcm: do not attach PF_KCM sockets to avoid deadlock · af33da0e
      Eric Dumazet authored
      
      [ Upstream commit 351050ec ]
      
      syzkaller had no problem triggering a deadlock by attaching a KCM socket
      to another one (or to itself). (The original syzkaller report was a very
      confusing lockdep splat during a sendmsg().)
      
      It seems KCM claims to only support TCP, but no enforcement is done,
      so we might need to add additional checks.
      
      Fixes: ab7ac4eb ("kcm: Kernel Connection Multiplexor module")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Acked-by: Tom Herbert <tom@quantonium.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      af33da0e
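
      A sketch of the kind of guard this implies in kcm_attach(); the csk naming
      follows the usual csock->sk convention and is an assumption here:

        struct sock *csk = csock->sk;

        /* Refuse to attach another KCM socket (including ourselves). */
        if (csk->sk_family == PF_KCM)
                return -EOPNOTSUPP;
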
    • packet: Don't write vnet header beyond end of buffer · 8c623e5d
      Benjamin Poirier authored
      
      [ Upstream commit edbd58be ]
      
      ... which may happen with certain values of tp_reserve and maclen.
      
      Fixes: 58d19b19 ("packet: vnet_hdr support for tpacket_rcv")
      Signed-off-by: Benjamin Poirier <bpoirier@suse.com>
      Cc: Willem de Bruijn <willemb@google.com>
      Acked-by: Willem de Bruijn <willemb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      8c623e5d
    • cxgb4: Fix stack out-of-bounds read due to wrong size to t4_record_mbox() · 2b3bd597
      Stefano Brivio authored
      
      [ Upstream commit 0f308686 ]
      
      Passing commands for logging to t4_record_mbox() with size
      MBOX_LEN, when the actual command size is smaller,
      causes out-of-bounds stack accesses in t4_record_mbox() while
      copying command words here:
      
      	for (i = 0; i < size / 8; i++)
      		entry->cmd[i] = be64_to_cpu(cmd[i]);
      
      Up to 48 bytes from the stack are then leaked to debugfs.
      
      This happens whenever we send (and log) commands described by
      structs fw_sched_cmd (32 bytes leaked), fw_vi_rxmode_cmd (48),
      fw_hello_cmd (48), fw_bye_cmd (48), fw_initialize_cmd (48),
      fw_reset_cmd (48), fw_pfvf_cmd (32), fw_eq_eth_cmd (16),
      fw_eq_ctrl_cmd (32), fw_eq_ofld_cmd (32), fw_acl_mac_cmd(16),
      fw_rss_glb_config_cmd(32), fw_rss_vi_config_cmd(32),
      fw_devlog_cmd(32), fw_vi_enable_cmd(48), fw_port_cmd(32),
      fw_sched_cmd(32), fw_devlog_cmd(32).
      
      The cxgb4vf driver got this right instead.
      
      When we call t4_record_mbox() to log a command reply, though, an MBOX_LEN
      size can be used, as get_mbox_rpl() will fill cmd_rpl up
      completely.
      
      Fixes: 7f080c3f ("cxgb4: Add support to enable logging of firmware mailbox commands")
      Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      2b3bd597
    • netvsc: fix deadlock betwen link status and removal · de2ecec2
      stephen hemminger authored
      
      [ Upstream commit 9b4e946c ]
      
      A deadlock is possible when canceling the link status
      delayed work. The removal process runs with RTNL held,
      and the link status callback acquires RTNL.
      
      Resolve the issue by using trylock and rescheduling.
      If a cancel is in progress, that blocks it from happening.
      
      Fixes: 122a5f64 ("staging: hv: use delayed_work for netvsc_send_garp()")
      Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      de2ecec2
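
      A sketch of the trylock-and-reschedule pattern described above; the dwork
      field and LINKCHANGE_INT delay are assumptions modelled on the netvsc
      link-change worker:

        if (!rtnl_trylock()) {
                /* removal holds RTNL; retry later instead of deadlocking */
                schedule_delayed_work(&ndev_ctx->dwork, LINKCHANGE_INT);
                return;
        }
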
    • qlge: avoid memcpy buffer overflow · 64dfc675
      Arnd Bergmann authored
      
      [ Upstream commit e58f9583 ]
      
      gcc-8.0.0 (snapshot) points out that we copy a variable-length string
      into a fixed length field using memcpy() with the destination length,
      and that ends up copying whatever follows the string:
      
          inlined from 'ql_core_dump' at drivers/net/ethernet/qlogic/qlge/qlge_dbg.c:1106:2:
      drivers/net/ethernet/qlogic/qlge/qlge_dbg.c:708:2: error: 'memcpy' reading 15 bytes from a region of size 14 [-Werror=stringop-overflow=]
        memcpy(seg_hdr->description, desc, (sizeof(seg_hdr->description)) - 1);
      
      Changing it to use strncpy() will instead zero-pad the destination,
      which seems to be the right thing to do here.
      
      The bug is probably harmless, but it seems like a good idea to address
      it in stable kernels as well, if only for the purpose of building with
      gcc-8 without warnings.
      
      Fixes: a61f8026 ("qlge: Add ethtool register dump function.")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      64dfc675
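
      The replacement described above is roughly the following, reusing the field
      named in the compiler warning:

        /* copies at most the field size minus one and zero-pads the remainder */
        strncpy(seg_hdr->description, desc, sizeof(seg_hdr->description) - 1);
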
    • sctp: Avoid out-of-bounds reads from address storage · 08d56d8a
      Stefano Brivio authored
      
      [ Upstream commit ee6c88bb ]
      
      inet_diag_msg_sctp{,l}addr_fill() and sctp_get_sctp_info() copy
      sizeof(sockaddr_storage) bytes to fill in sockaddr structs used
      to export diagnostic information to userspace.
      
      However, the memory allocated to store sockaddr information is
      smaller than that and depends on the address family, so we leak
      up to 100 uninitialized bytes to userspace. Just use the size of
      the source structs instead; in all three cases this is what
      userspace expects. Zero out the remaining memory.
      
      Unused bytes (i.e. when IPv4 addresses are used) in source
      structs sctp_sockaddr_entry and sctp_transport are already
      cleared by sctp_add_bind_addr() and sctp_transport_new(),
      respectively.
      
      Noticed while testing KASAN-enabled kernel with 'ss':
      
      [ 2326.885243] BUG: KASAN: slab-out-of-bounds in inet_sctp_diag_fill+0x42c/0x6c0 [sctp_diag] at addr ffff881be8779800
      [ 2326.896800] Read of size 128 by task ss/9527
      [ 2326.901564] CPU: 0 PID: 9527 Comm: ss Not tainted 4.11.0-22.el7a.x86_64 #1
      [ 2326.909236] Hardware name: Dell Inc. PowerEdge R730/072T6D, BIOS 2.4.3 01/17/2017
      [ 2326.917585] Call Trace:
      [ 2326.920312]  dump_stack+0x63/0x8d
      [ 2326.924014]  kasan_object_err+0x21/0x70
      [ 2326.928295]  kasan_report+0x288/0x540
      [ 2326.932380]  ? inet_sctp_diag_fill+0x42c/0x6c0 [sctp_diag]
      [ 2326.938500]  ? skb_put+0x8b/0xd0
      [ 2326.942098]  ? memset+0x31/0x40
      [ 2326.945599]  check_memory_region+0x13c/0x1a0
      [ 2326.950362]  memcpy+0x23/0x50
      [ 2326.953669]  inet_sctp_diag_fill+0x42c/0x6c0 [sctp_diag]
      [ 2326.959596]  ? inet_diag_msg_sctpasoc_fill+0x460/0x460 [sctp_diag]
      [ 2326.966495]  ? __lock_sock+0x102/0x150
      [ 2326.970671]  ? sock_def_wakeup+0x60/0x60
      [ 2326.975048]  ? remove_wait_queue+0xc0/0xc0
      [ 2326.979619]  sctp_diag_dump+0x44a/0x760 [sctp_diag]
      [ 2326.985063]  ? sctp_ep_dump+0x280/0x280 [sctp_diag]
      [ 2326.990504]  ? memset+0x31/0x40
      [ 2326.994007]  ? mutex_lock+0x12/0x40
      [ 2326.997900]  __inet_diag_dump+0x57/0xb0 [inet_diag]
      [ 2327.003340]  ? __sys_sendmsg+0x150/0x150
      [ 2327.007715]  inet_diag_dump+0x4d/0x80 [inet_diag]
      [ 2327.012979]  netlink_dump+0x1e6/0x490
      [ 2327.017064]  __netlink_dump_start+0x28e/0x2c0
      [ 2327.021924]  inet_diag_handler_cmd+0x189/0x1a0 [inet_diag]
      [ 2327.028045]  ? inet_diag_rcv_msg_compat+0x1b0/0x1b0 [inet_diag]
      [ 2327.034651]  ? inet_diag_dump_compat+0x190/0x190 [inet_diag]
      [ 2327.040965]  ? __netlink_lookup+0x1b9/0x260
      [ 2327.045631]  sock_diag_rcv_msg+0x18b/0x1e0
      [ 2327.050199]  netlink_rcv_skb+0x14b/0x180
      [ 2327.054574]  ? sock_diag_bind+0x60/0x60
      [ 2327.058850]  sock_diag_rcv+0x28/0x40
      [ 2327.062837]  netlink_unicast+0x2e7/0x3b0
      [ 2327.067212]  ? netlink_attachskb+0x330/0x330
      [ 2327.071975]  ? kasan_check_write+0x14/0x20
      [ 2327.076544]  netlink_sendmsg+0x5be/0x730
      [ 2327.080918]  ? netlink_unicast+0x3b0/0x3b0
      [ 2327.085486]  ? kasan_check_write+0x14/0x20
      [ 2327.090057]  ? selinux_socket_sendmsg+0x24/0x30
      [ 2327.095109]  ? netlink_unicast+0x3b0/0x3b0
      [ 2327.099678]  sock_sendmsg+0x74/0x80
      [ 2327.103567]  ___sys_sendmsg+0x520/0x530
      [ 2327.107844]  ? __get_locked_pte+0x178/0x200
      [ 2327.112510]  ? copy_msghdr_from_user+0x270/0x270
      [ 2327.117660]  ? vm_insert_page+0x360/0x360
      [ 2327.122133]  ? vm_insert_pfn_prot+0xb4/0x150
      [ 2327.126895]  ? vm_insert_pfn+0x32/0x40
      [ 2327.131077]  ? vvar_fault+0x71/0xd0
      [ 2327.134968]  ? special_mapping_fault+0x69/0x110
      [ 2327.140022]  ? __do_fault+0x42/0x120
      [ 2327.144008]  ? __handle_mm_fault+0x1062/0x17a0
      [ 2327.148965]  ? __fget_light+0xa7/0xc0
      [ 2327.153049]  __sys_sendmsg+0xcb/0x150
      [ 2327.157133]  ? __sys_sendmsg+0xcb/0x150
      [ 2327.161409]  ? SyS_shutdown+0x140/0x140
      [ 2327.165688]  ? exit_to_usermode_loop+0xd0/0xd0
      [ 2327.170646]  ? __do_page_fault+0x55d/0x620
      [ 2327.175216]  ? __sys_sendmsg+0x150/0x150
      [ 2327.179591]  SyS_sendmsg+0x12/0x20
      [ 2327.183384]  do_syscall_64+0xe3/0x230
      [ 2327.187471]  entry_SYSCALL64_slow_path+0x25/0x25
      [ 2327.192622] RIP: 0033:0x7f41d18fa3b0
      [ 2327.196608] RSP: 002b:00007ffc3b731218 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
      [ 2327.205055] RAX: ffffffffffffffda RBX: 00007ffc3b731380 RCX: 00007f41d18fa3b0
      [ 2327.213017] RDX: 0000000000000000 RSI: 00007ffc3b731340 RDI: 0000000000000003
      [ 2327.220978] RBP: 0000000000000002 R08: 0000000000000004 R09: 0000000000000040
      [ 2327.228939] R10: 00007ffc3b730f30 R11: 0000000000000246 R12: 0000000000000003
      [ 2327.236901] R13: 00007ffc3b731340 R14: 00007ffc3b7313d0 R15: 0000000000000084
      [ 2327.244865] Object at ffff881be87797e0, in cache kmalloc-64 size: 64
      [ 2327.251953] Allocated:
      [ 2327.254581] PID = 9484
      [ 2327.257215]  save_stack_trace+0x1b/0x20
      [ 2327.261485]  save_stack+0x46/0xd0
      [ 2327.265179]  kasan_kmalloc+0xad/0xe0
      [ 2327.269165]  kmem_cache_alloc_trace+0xe6/0x1d0
      [ 2327.274138]  sctp_add_bind_addr+0x58/0x180 [sctp]
      [ 2327.279400]  sctp_do_bind+0x208/0x310 [sctp]
      [ 2327.284176]  sctp_bind+0x61/0xa0 [sctp]
      [ 2327.288455]  inet_bind+0x5f/0x3a0
      [ 2327.292151]  SYSC_bind+0x1a4/0x1e0
      [ 2327.295944]  SyS_bind+0xe/0x10
      [ 2327.299349]  do_syscall_64+0xe3/0x230
      [ 2327.303433]  return_from_SYSCALL_64+0x0/0x6a
      [ 2327.308194] Freed:
      [ 2327.310434] PID = 4131
      [ 2327.313065]  save_stack_trace+0x1b/0x20
      [ 2327.317344]  save_stack+0x46/0xd0
      [ 2327.321040]  kasan_slab_free+0x73/0xc0
      [ 2327.325220]  kfree+0x96/0x1a0
      [ 2327.328530]  dynamic_kobj_release+0x15/0x40
      [ 2327.333195]  kobject_release+0x99/0x1e0
      [ 2327.337472]  kobject_put+0x38/0x70
      [ 2327.341266]  free_notes_attrs+0x66/0x80
      [ 2327.345545]  mod_sysfs_teardown+0x1a5/0x270
      [ 2327.350211]  free_module+0x20/0x2a0
      [ 2327.354099]  SyS_delete_module+0x2cb/0x2f0
      [ 2327.358667]  do_syscall_64+0xe3/0x230
      [ 2327.362750]  return_from_SYSCALL_64+0x0/0x6a
      [ 2327.367510] Memory state around the buggy address:
      [ 2327.372855]  ffff881be8779700: fc fc fc fc 00 00 00 00 00 00 00 00 fc fc fc fc
      [ 2327.380914]  ffff881be8779780: fb fb fb fb fb fb fb fb fc fc fc fc 00 00 00 00
      [ 2327.388972] >ffff881be8779800: 00 00 00 00 fc fc fc fc fb fb fb fb fb fb fb fb
      [ 2327.397031]                                ^
      [ 2327.401792]  ffff881be8779880: fc fc fc fc fb fb fb fb fb fb fb fb fc fc fc fc
      [ 2327.409850]  ffff881be8779900: 00 00 00 00 00 04 fc fc fc fc fc fc 00 00 00 00
      [ 2327.417907] ==================================================================
      
      This fixes CVE-2017-7558.
      
      References: https://bugzilla.redhat.com/show_bug.cgi?id=1480266
      Fixes: 8f840e47 ("sctp: add the sctp_diag.c file")
      Cc: Xin Long <lucien.xin@gmail.com>
      Cc: Vlad Yasevich <vyasevich@gmail.com>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
      Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
      Reviewed-by: Xin Long <lucien.xin@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      08d56d8a
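
      A sketch of the bounded copy described above; the info/addr names are
      illustrative, and the family check is an assumption about how the source
      address is laid out:

        size_t len = (addr->sa.sa_family == AF_INET) ?
                     sizeof(struct sockaddr_in) : sizeof(struct sockaddr_in6);

        memcpy(info, addr, len);
        memset((char *)info + len, 0,
               sizeof(struct sockaddr_storage) - len);
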
    • fsl/man: Inherit parent device and of_node · 4d8ee193
      Florian Fainelli authored
      
      [ Upstream commit a1a50c8e ]
      
      Junote Cai reported that he was not able to get a DSA setup involving the
      Freescale DPAA/FMAN driver to work and narrowed it down to
      of_find_net_device_by_node(). This function requires the network device's
      device reference to be correctly set which is the case here, though we have
      lost any device_node association there.
      
      The problem is that dpaa_eth_add_device() allocates a "dpaa-ethernet" platform
      device, and later on dpaa_eth_probe() is called but SET_NETDEV_DEV() won't
      propagate &pdev->dev.of_node properly. Fix this by inheriting both the parent
      device and the of_node when dpaa_eth_add_device() creates the platform device.
      
      Fixes: 39339616 ("fsl/fman: Add FMan MAC driver")
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      4d8ee193
    • udp: on peeking bad csum, drop packets even if not at head · 1e39e5c6
      Eric Dumazet authored
      
      [ Upstream commit fd6055a8 ]
      
      When peeking, if a bad csum is discovered, the skb is unlinked from
      the queue with __sk_queue_drop_skb and the peek operation restarted.
      
      __sk_queue_drop_skb only drops packets that match the queue head.
      
      This fails if the skb was found after the head, using SO_PEEK_OFF
      socket option. This causes an infinite loop.
      
      We MUST drop this problematic skb, and we can simply check if the skb was
      already removed by another thread, by looking at skb->next:
      
      This pointer is set to NULL by the __skb_unlink() operation, which can
      only have happened under the spinlock protection.
      
      Many thanks to syzkaller team (and particularly Dmitry Vyukov who
      provided us nice C reproducers exhibiting the lockup) and Willem de
      Bruijn who provided first version for this patch and a test program.
      
      Fixes: 627d2d6b ("udp: enable MSG_PEEK at non-zero offset")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Willem de Bruijn <willemb@google.com>
      Acked-by: Paolo Abeni <pabeni@redhat.com>
      Acked-by: Willem de Bruijn <willemb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      1e39e5c6
    • macsec: add genl family module alias · 4b4a194a
      Sabrina Dubroca authored
      
      [ Upstream commit 78362998 ]
      
      This helps tools such as wpa_supplicant start even if the macsec
      module isn't loaded yet.
      
      Fixes: c09440f7 ("macsec: introduce IEEE 802.1AE driver")
      Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      4b4a194a
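
      The alias itself is a one-liner in the macsec driver, along these lines:

        /* let a genetlink lookup for the "macsec" family autoload this module */
        MODULE_ALIAS_GENL_FAMILY(MACSEC_GENL_NAME);
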
    • ipv6: fix sparse warning on rt6i_node · 43c792a8
      Wei Wang authored
      
      [ Upstream commit 4e587ea7 ]
      
      Commit c5cff856 adds an rcu grace period before freeing fib6_node. This
      generates a new sparse warning on rt->rt6i_node related code:
        net/ipv6/route.c:1394:30: error: incompatible types in comparison
        expression (different address spaces)
        ./include/net/ip6_fib.h:187:14: error: incompatible types in comparison
        expression (different address spaces)
      
      This commit adds "__rcu" tag for rt6i_node and makes sure corresponding
      rcu API is used for it.
      After this fix, sparse no longer generates the above warning.
      
      Fixes: c5cff856 ("ipv6: add rcu grace period before freeing fib6_node")
      Signed-off-by: Wei Wang <weiwan@google.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      43c792a8
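
      A sketch of the annotation and the matching accessor usage the warning
      calls for (the surrounding context is illustrative):

        /* in struct rt6_info: mark the pointer as RCU-managed for sparse */
        struct fib6_node __rcu  *rt6i_node;

        /* readers then go through the rcu accessor: */
        struct fib6_node *fn = rcu_dereference(rt->rt6i_node);
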
    • ipv6: add rcu grace period before freeing fib6_node · 7f8f23fc
      Wei Wang authored
      
      [ Upstream commit c5cff856 ]
      
      We currently keep rt->rt6i_node pointing to the fib6_node for the route.
      Some functions use this pointer to dereference the fib6_node
      from the rt structure, e.g. rt6_check(). However, as neither a
      refcount nor rcu protection is taken when dereferencing rt->rt6i_node,
      it could potentially cause crashes because rt->rt6i_node can be set to
      NULL by other CPUs when doing a route deletion.
      This patch introduces an rcu grace period before freeing fib6_node and
      makes sure the functions that dereference it take rcu_read_lock().
      
      Note: there is no "Fixes" tag because this bug was there in a very
      early stage.
      Signed-off-by: Wei Wang <weiwan@google.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      7f8f23fc
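
      A minimal sketch of the deferred free this describes, assuming an rcu_head
      member added to struct fib6_node (names are illustrative):

        static void node_free_rcu(struct rcu_head *head)
        {
                struct fib6_node *fn = container_of(head, struct fib6_node, rcu);

                kmem_cache_free(fib6_node_kmem, fn);
        }

        static void node_free(struct fib6_node *fn)
        {
                /* readers under rcu_read_lock() may still see fn until the grace period ends */
                call_rcu(&fn->rcu, node_free_rcu);
        }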