1. 17 May, 2018 17 commits
    • Merge tag 'wireless-drivers-next-for-davem-2018-05-17' of... · a564b659
      David S. Miller authored
      Merge tag 'wireless-drivers-next-for-davem-2018-05-17' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next
      
      Kalle Valo says:
      
      ====================
      wireless-drivers-next patches for 4.18
      
      The first pull request for 4.18. As usual new features and bug fixes
      but nothing really special.
      
      I also merged wireless-drivers due to an iwlwifi patch dependency.
      
      Major changes:
      
      iwlwifi
      
      * implement Traffic Condition Monitor and use it for scan, BT coex and
        to detect when the AP doesn't support UAPSD properly
      
      * some more work for the 22000 family of devices;
      
      * introduce AMSDU rate control offload
      
      qtnfmac
      
      * DFS offload support
      
      rsi
      
      * roaming enhancements
      
      * increase max supported aggregation subframes
      
      * don't advertise 5 GHz support if the device doesn't support it
      
      brcmfmac
      
      * add support for BCM4366E chipset
      
      * add support for bcm43364 wireless chipset
      
      ath10k
      
      * enable temperature reads for QCA6174 and QCA9377
      
      * add firmware memory dump support for QCA9984
      
      * continue adding WCN3990 support via SNOC bus
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • vmxnet3: Replace msleep(1) with usleep_range() · 93c65d13
      YueHaibing authored
      As documented in Documentation/timers/timers-howto.txt,
      replace msleep(1) with usleep_range().
      Signed-off-by: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
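      A minimal sketch of the pattern this commit applies (not the actual
      vmxnet3 hunk; the helper name is made up). Per
      Documentation/timers/timers-howto.txt, msleep(1) on a HZ=100 kernel
      can end up sleeping for as long as 20 ms, so short waits are better
      expressed as a range:

          #include <linux/delay.h>

          /* Hypothetical helper illustrating the msleep(1) replacement. */
          static void wait_for_device_quiesce(void)
          {
                  /* before: msleep(1);  -- may actually sleep up to ~20 ms */
                  usleep_range(1000, 2000);       /* sleep 1-2 ms instead */
          }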
    • bonding: introduce link change helper · 7e878b60
      Tonghao Zhang authored
      Introduce a new common helper to avoid redundancy.
      Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'tcp-default-RACK-loss-recovery' · 10e361e1
      David S. Miller authored
      Yuchung Cheng says:
      
      ====================
      tcp: default RACK loss recovery
      
      This patch set implements the features corresponding to the
      draft-ietf-tcpm-rack-03 version of the RACK draft.
      https://datatracker.ietf.org/meeting/101/materials/slides-101-tcpm-update-on-tcp-rack-00
      
      1. SACK: implement equivalent DUPACK threshold heuristic in RACK to
         replace existing RFC6675 recovery (tcp_mark_head_lost).
      
      2. Non-SACK: simplify RFC6582 NewReno implementation
      
      3. RTO: apply RACK's time-based approach to avoid spuriously
         marking very recently sent packets lost.
      
      4. with (1)(2)(3), make RACK the exclusive fast recovery mechanism to
         mark losses based on time on S/ACK. Tail loss probe and F-RTO remain
         enabled by default as complementary mechanisms to send probes in
         CA_Open and CA_Loss states. The probes would solicit S/ACKs to trigger
         RACK time-based loss detection.
      
      All Google web and internal servers have been running RACK-only mode
      (4) for a while now. A/B experiments indicate that RACK/TLP on average
      reduces recovery latency by 10% compared to RFC6675. RFC6675
      is now default-off but can be re-enabled by disabling RACK (sysctl
      net.ipv4.tcp_recovery=0) should unforeseen issues arise.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: don't mark recently sent packets lost on RTO · 56f8c5d7
      Yuchung Cheng authored
      An RTO event indicates the head has not been acked for a long time
      after its last (re)transmission. But the other packets are not
      necessarily lost if they have been only sent recently (for example
      due to application limit). This patch prohibits marking packets
      sent within an RTT as lost on an RTO event, using logic similar
      to that of TCP RACK loss detection.
      
      Normally the head (SND.UNA) would be marked lost since RTO should
      fire strictly after the head was sent. An exception is when the
      most recent RACK RTT measurement is larger than the (previous)
      RTO. To address this exception the head is always marked lost.
      
      Congestion control interaction: since we may not mark every packet
      lost, the congestion window may be more than 1 (inflight plus 1).
      But only one packet will be retransmitted after RTO, since
      tcp_retransmit_timer() calls tcp_retransmit_skb(...,segs=1). The
      connection still performs slow start from one packet (with Cubic
      congestion control).
      
      This commit was tested in an A/B test with Google web servers,
      and showed a reduction of 2% in (spurious) retransmits post
      timeout (SlowStartRetrans), and correspondingly reduced DSACKs
      (DSACKIgnoredOld) by 7%.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Reviewed-by: Soheil Hassas Yeganeh <soheil@google.com>
      Reviewed-by: Priyaranjan Jha <priyarjha@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
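      A self-contained user-space sketch of the rule above, not kernel code
      (packet timings and names are invented for illustration): on RTO the
      head is always marked lost, while later packets are spared if they
      were (re)transmitted within roughly one RACK RTT of the timeout.

          #include <stdbool.h>
          #include <stdio.h>

          struct pkt {
                  int seq;
                  double sent_ms;         /* last (re)transmission time */
          };

          int main(void)
          {
                  const double now_ms = 1000.0;           /* RTO fires here */
                  const double rack_rtt_ms = 40.0;        /* latest RACK RTT */
                  struct pkt q[] = {
                          { 1, 700.0 }, { 2, 710.0 }, { 3, 980.0 }, { 4, 995.0 },
                  };

                  for (int i = 0; i < 4; i++) {
                          bool is_head = (i == 0);
                          bool sent_recently =
                                  (now_ms - q[i].sent_ms) < rack_rtt_ms;

                          /* Head is always lost; recently sent packets are
                           * not marked lost yet, mirroring the RACK check. */
                          if (is_head || !sent_recently)
                                  printf("seq %d: mark lost\n", q[i].seq);
                          else
                                  printf("seq %d: keep for now\n", q[i].seq);
                  }
                  return 0;
          }

      With these numbers, packets 1 and 2 are marked lost while 3 and 4
      (sent 20 ms and 5 ms before the timeout) are left alone.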
    • tcp: new helper tcp_rack_skb_timeout · b8fef65a
      Yuchung Cheng authored
      Create and export a new helper tcp_rack_skb_timeout and move tcp_is_rack
      to prepare the final RTO change.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Reviewed-by: Soheil Hassas Yeganeh <soheil@google.com>
      Reviewed-by: Priyaranjan Jha <priyarjha@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: separate loss marking and state update on RTO · c77d62ff
      Yuchung Cheng authored
      Previously when TCP times out, it first updates cwnd and ssthresh,
      marks packets lost, and then updates congestion state again. This
      was fine because everything not yet delivered is marked lost,
      so the inflight is always 0 and cwnd can be safely set to 1 to
      retransmit one packet on timeout.
      
      But the inflight may not always be 0 on timeout if TCP changes to
      mark packets lost based on packet sent time. Therefore we must
      first mark the packet lost, then set the cwnd based on the
      (updated) inflight.
      
      This is not a pure refactor. Congestion control may potentially
      break if it uses (not yet updated) inflight to compute ssthresh.
      Fortunately, no existing congestion control module does that. The
      change also alters the inflight seen when CA_LOSS_EVENT is raised;
      only westwood processes that event, and it does not use inflight.
      
      This change has two other minor side benefits:
      1) consistent with Fast Recovery, where the inflight is updated
         before tcp_enter_recovery flips the state to CA_Recovery.
      
      2) avoid intertwining loss marking with state update, making the
         code more readable.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Reviewed-by: Soheil Hassas Yeganeh <soheil@google.com>
      Reviewed-by: Priyaranjan Jha <priyarjha@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: new helper tcp_timeout_mark_lost · 2ad55f56
      Yuchung Cheng authored
      Refactor using a new helper, tcp_timeout_mark_lost(), that marks packets
      lost upon RTO.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Reviewed-by: Soheil Hassas Yeganeh <soheil@google.com>
      Reviewed-by: Priyaranjan Jha <priyarjha@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: account lost retransmit after timeout · d716bfdb
      Yuchung Cheng authored
      The previous approach for the lost and retransmit bits was to
      wipe the slate clean: zero all the lost and retransmit bits,
      correspondingly zero the lost_out and retrans_out counters, and
      then add back the lost bits (and correspondingly increment lost_out).
      
      The new approach is to treat this very much like marking packets
      lost in fast recovery. We don’t wipe the slate clean. We just say
      that for all packets that were not yet marked sacked or lost, we now
      mark them as lost in exactly the same way we do for fast recovery.
      
      This fixes the lost retransmit accounting at RTO time and greatly
      simplifies the RTO code by sharing much of the logic with Fast
      Recovery.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Reviewed-by: Soheil Hassas Yeganeh <soheil@google.com>
      Reviewed-by: Priyaranjan Jha <priyarjha@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: simpler NewReno implementation · 6ac06ecd
      Yuchung Cheng authored
      This is a rewrite of the NewReno loss recovery implementation that is
      simpler and standalone, for readability and better performance by
      using fewer states.
      
      Note that NewReno refers to RFC6582 as a modification to the fast
      recovery algorithm. It is used only if the connection does not
      support SACK in Linux. It should not be confused with the Reno
      (AIMD) congestion control.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Reviewed-by: Soheil Hassas Yeganeh <soheil@google.com>
      Reviewed-by: Priyaranjan Jha <priyarjha@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: disable RFC6675 loss detection · b38a51fe
      Yuchung Cheng authored
      This patch disables RFC6675 loss detection and makes sysctl
      net.ipv4.tcp_recovery a binary choice between RACK
      (1) and RFC6675 (0).
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Reviewed-by: Soheil Hassas Yeganeh <soheil@google.com>
      Reviewed-by: Priyaranjan Jha <priyarjha@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
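      A runnable user-space sketch showing how to check which mode a host
      is using. It assumes the semantics stated above plus the standard
      /proc/sys path for this sysctl, reading tcp_recovery as a bitmap
      whose 0x1 bit enables RACK (0 falls back to RFC6675).

          #include <stdio.h>

          int main(void)
          {
                  FILE *f = fopen("/proc/sys/net/ipv4/tcp_recovery", "r");
                  int val;

                  if (!f || fscanf(f, "%d", &val) != 1) {
                          perror("tcp_recovery");
                          return 1;
                  }
                  fclose(f);

                  /* Bit 0x1 enables RACK loss detection; 0 selects the
                   * older RFC6675 behaviour, per the commit above. */
                  printf("tcp_recovery=%d -> %s loss detection\n", val,
                         (val & 0x1) ? "RACK" : "RFC6675");
                  return 0;
          }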
    • tcp: support DUPACK threshold in RACK · 20b654df
      Yuchung Cheng authored
      This patch adds support for the classic DUPACK threshold rule
      (#DupThresh) in RACK.
      
      When the number of packets SACKed is greater or equal to the
      threshold, RACK sets the reordering window to zero which would
      immediately mark all the unsacked packets below the highest SACKed
      sequence lost. Since this approach is known to not work well with
      reordering, RACK only uses it if no reordering has been observed.
      
      The DUPACK threshold rule is a particularly useful extension to the
      fast recoveries triggered by the RACK reordering timer, for example
      on data-center transfers where the RTT is much smaller than a timer
      tick, or on high-RTT paths where the default RTT/4 may take too long.
      
      Note that this patch differs slightly from RFC6675. RFC6675
      considers a packet lost when at least #DupThresh higher-sequence
      packets are SACKed.
      
      With RACK, for connections that have seen reordering, RACK
      continues to use a dynamically-adaptive time-based reordering
      window to detect losses. But for connections on which we have not
      yet seen reordering, this patch considers a packet lost when at
      least one higher sequence packet is SACKed and the total number
      of SACKed packets is at least DupThresh. For example, suppose a
      connection has not seen reordering, and sends 10 packets, and
      packets 3, 5, 7 are SACKed. RFC6675 considers packets 1 and 2
      lost. RACK considers packets 1, 2, 4, 6 lost.
      
      There is some small risk of spurious retransmits here due to
      reordering. However, this is mostly limited to the first flight of
      a connection on which the sender receives SACKs from reordering.
      And RFC 6675 and FACK loss detection have a similar risk on the
      first flight with reordering (it's just that the risk of spurious
      retransmits from reordering was slightly narrower for those older
      algorithms due to the margin of 3*MSS).
      
      Also the minimum reordering window is reduced from 1 msec to 0
      to recover quicker on short RTT transfers. Therefore RACK is more
      aggressive in marking packets lost during recovery to reduce the
      reordering window timeouts.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Reviewed-by: Soheil Hassas Yeganeh <soheil@google.com>
      Reviewed-by: Priyaranjan Jha <priyarjha@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
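      A self-contained user-space sketch of the worked example above (not
      kernel code): 10 packets outstanding, packets 3, 5 and 7 SACKed, no
      reordering observed, DupThresh of 3. It prints the packets the rule
      described here would mark lost (1, 2, 4 and 6).

          #include <stdbool.h>
          #include <stdio.h>

          #define DUPTHRESH 3

          int main(void)
          {
                  bool sacked[11] = { false };
                  int total_sacked = 0, highest_sacked = 0;

                  sacked[3] = sacked[5] = sacked[7] = true;   /* SACKed: 3, 5, 7 */

                  for (int seq = 1; seq <= 10; seq++) {
                          if (sacked[seq]) {
                                  total_sacked++;
                                  highest_sacked = seq;
                          }
                  }

                  /* Once DupThresh is met and no reordering has been seen,
                   * the reordering window collapses to zero: every unSACKed
                   * packet below the highest SACKed sequence is marked lost. */
                  if (total_sacked >= DUPTHRESH) {
                          for (int seq = 1; seq < highest_sacked; seq++)
                                  if (!sacked[seq])
                                          printf("seq %d: mark lost\n", seq);
                  }
                  return 0;
          }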
    • net: ethernet: ti: cpsw: disable mq feature for "AM33xx ES1.0" devices · 9611d6d6
      Ivan Khoronzhuk authored
      Early am33xx devices, those with the ES1.0 SoC revision, have an
      erratum limiting mq support. That's the same erratum as
      commit 7da11600 ("drivers: net: cpsw: add am335x errata workarround for
      interrutps")
      
      AM33xx Errata [1] Advisory 1.0.9
      http://www.ti.com/lit/er/sprz360f/sprz360f.pdf
      
      Further investigation found that the driver's workaround is applied
      to all AM33xx SoCs and to DM814x, although the erratum exists only
      for ES1.0 of the AM33xx family, so it needlessly limits mq support on
      revisions after ES1.0. So, disable mq support only for the affected
      SoCs and use separate polls for revisions that allow mq.
      Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'sched-refactor-NOLOCK-qdiscs' · 4b9c7768
      David S. Miller authored
      Paolo Abeni says:
      
      ====================
      sched: refactor NOLOCK qdiscs
      
      With the introduction of NOLOCK qdiscs, pfifo_fast performance in the
      uncontended scenario degraded measurably, especially after the commit
      eb82a994 ("net: sched, fix OOO packets with pfifo_fast").
      
      This series restores pfifo_fast performance in that scenario to the
      previous level, mainly by reducing the number of atomic operations required
      to perform the qdisc_run() call. Performance in the contended scenario
      also increases measurably.
      
      Note: This series is on top of:
      
      sched: manipulate __QDISC_STATE_RUNNING in qdisc_run_* helpers
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • pfifo_fast: drop unneeded additional lock on dequeue · 021a17ed
      Paolo Abeni authored
      After the previous patch, for NOLOCK qdiscs, q->seqlock is
      always held when dequeue() is invoked, so we can drop
      the additional locking that protected that operation.
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • sched: replace __QDISC_STATE_RUNNING bit with a spin lock · 96009c7d
      Paolo Abeni authored
      So that we can use lockdep on it.
      The newly introduced sequence lock has the same scope as busylock,
      so it shares the same lockdep annotation, but it's only used for
      NOLOCK qdiscs.
      
      With this changeset we also acquire the lock in the control path around
      the flushing operation (qdisc reset), to allow further NOLOCK qdisc
      performance improvements in the next patch.
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next · b9f672af
      David S. Miller authored
      Daniel Borkmann says:
      
      ====================
      pull-request: bpf-next 2018-05-17
      
      The following pull-request contains BPF updates for your *net-next* tree.
      
      The main changes are:
      
      1) Provide a new BPF helper for doing a FIB and neighbor lookup
         in the kernel tables from an XDP or tc BPF program. The helper
         provides a fast-path for forwarding packets. The API supports
         IPv4, IPv6 and MPLS protocols, but currently IPv4 and IPv6 are
         implemented in this initial work, from David (Ahern).
      
      2) Just a tiny diff but huge feature enabled for nfp driver by
         extending the BPF offload beyond a pure host processing offload.
         Offloaded XDP programs are allowed to set the RX queue index and
         thus opening the door for defining a fully programmable RSS/n-tuple
         filter replacement. Once BPF decided on a queue already, the device
         data-path will skip the conventional RSS processing completely,
         from Jakub.
      
      3) The original sockmap implementation was array based similar to
         devmap. However unlike devmap where an ifindex has a 1:1 mapping
         into the map there are use cases with sockets that need to be
         referenced using longer keys. Hence, sockhash map is added reusing
         as much of the sockmap code as possible, from John.
      
      4) Introduce BTF ID. The ID is allocated through an IDR, similar to
         BPF maps and progs. It also makes BTF accessible to user
         space via BPF_BTF_GET_FD_BY_ID and adds exposure of the BTF data
         through BPF_OBJ_GET_INFO_BY_FD, from Martin.
      
      5) Enable BPF stackmap with build_id also in NMI context. Because the
         up_read() of current->mm->mmap_sem cannot be done from NMI context,
         build_id could not be parsed there. This work defers the up_read()
         via a per-cpu irq_work so that at least limited support can be
         enabled, from Song.
      
      6) Various BPF JIT follow-up cleanups and fixups after the LD_ABS/LD_IND
         JIT conversion as well as implementation of an optimized 32/64 bit
         immediate load in the arm64 JIT that reduces the number of
         emitted instructions; tested real-world programs shrank by
         three percent, from Daniel.
      
      7) Add ifindex parameter to the libbpf loader in order to enable
         BPF offload support. Right now only iproute2 can load offloaded
         BPF and this will also enable libbpf for direct integration into
         other applications, from David (Beckett).
      
      8) Convert the plain text documentation under Documentation/bpf/ into
         RST format since this is the appropriate standard the kernel is
         moving to for all documentation. Also add an overview README.rst,
         from Jesper.
      
      9) Add __printf verification attribute to the bpf_verifier_vlog()
         helper. Though it uses va_list we can still allow gcc to check
         the format string, from Mathieu.
      
      10) Fix a bash reference in the BPF selftest's Makefile. The '|& ...'
          is a bash 4.0+ feature which is not guaranteed to be available
          when calling out to shell, therefore use a more portable variant,
          from Joe.
      
      11) Fix a 64 bit division in xdp_umem_reg() by using div_u64()
          instead of relying on the gcc built-in, from Björn.
      
      12) Fix a sock hashmap kmalloc warning reported by syzbot when an
          overly large key size is used in the hashmap, causing overflows
          in htab->elem_size. Reject bogus attr->key_size early in
          sock_hash_alloc(), from Yonghong.
      
      13) Ensure in BPF selftests when urandom_read is being linked that
          --build-id is always enabled so that test_stacktrace_build_id[_nmi]
          won't be failing, from Alexei.
      
      14) Add bitsperlong.h as well as errno.h uapi headers into the tools
          header infrastructure which point to one of the arch specific
          uapi headers. This was needed in order to fix a build error on
          some systems for the BPF selftests, from Sirio.
      
      15) Allow for short options to be used in the xdp_monitor BPF sample
          code. And also a bpf.h tools uapi header sync in order to fix a
          selftest build failure. Both from Prashant.
      
      16) More formally clarify the meaning of ID in the direct packet access
          section of the BPF documentation, from Wang.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
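      Item 11 above mentions div_u64(); here is a hedged sketch of that
      pattern (the function and variable names are invented, this is not the
      actual xdp_umem_reg() change). A plain '/' with a u64 dividend makes
      32-bit builds emit a call to the compiler's 64-bit division helper,
      which the kernel does not link on all architectures, so such divisions
      go through the math64 helpers instead:

          #include <linux/math64.h>
          #include <linux/types.h>

          /* Hypothetical example: how many fixed-size frames fit in a umem. */
          static u64 frames_in_umem(u64 umem_size, u32 frame_size)
          {
                  /* before: return umem_size / frame_size;  (breaks 32-bit builds) */
                  return div_u64(umem_size, frame_size);
          }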
  2. 16 May, 2018 23 commits