  30 Nov, 2017 6 commits
    • Merge branch 'macb-rx-packet-filtering' · 201c78e0
      David S. Miller authored
      Rafal Ozieblo says:
      
      ====================
      Receive packets filtering for macb driver
      
      This patch series adds support for received packet filtering
      for the Cadence GEM driver. Packets can be redirected to
      different hardware queues based on source IP, destination IP,
      source port, or destination port (a hedged userspace sketch of
      configuring such a rule follows this entry). To enable filtering,
      support for RX queueing was added as well.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      201c78e0
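
      The filtering added by this series is driven through the kernel's
      standard ethtool flow-steering interface. As a hedged illustration
      (not code from the patches), the userspace sketch below inserts a rule
      steering TCP/IPv4 traffic with destination port 5001 to RX queue 1;
      the interface name "eth0", the port, the queue index, and rule slot 0
      are all illustrative, and whether the request succeeds depends on the
      driver and hardware.

        /* Hedged sketch: install an ntuple steering rule via the standard
         * SIOCETHTOOL ioctl. "eth0", port 5001, queue 1, slot 0 are examples. */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <sys/socket.h>
        #include <net/if.h>
        #include <arpa/inet.h>
        #include <linux/types.h>
        #include <linux/ethtool.h>
        #include <linux/sockios.h>

        int main(void)
        {
            struct ethtool_rxnfc nfc;
            struct ifreq ifr;
            int fd = socket(AF_INET, SOCK_DGRAM, 0);

            if (fd < 0) {
                perror("socket");
                return 1;
            }

            memset(&nfc, 0, sizeof(nfc));
            nfc.cmd = ETHTOOL_SRXCLSRLINS;              /* insert a classification rule */
            nfc.fs.flow_type = TCP_V4_FLOW;             /* match TCP over IPv4 */
            nfc.fs.h_u.tcp_ip4_spec.pdst = htons(5001); /* destination port to match */
            nfc.fs.m_u.tcp_ip4_spec.pdst = 0xffff;      /* all port bits must match */
            nfc.fs.ring_cookie = 1;                     /* steer matches to RX queue 1 */
            nfc.fs.location = 0;                        /* rule slot 0 (driver-dependent) */

            memset(&ifr, 0, sizeof(ifr));
            strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
            ifr.ifr_data = (void *)&nfc;

            if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
                perror("ETHTOOL_SRXCLSRLINS");

            close(fd);
            return 0;
        }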
    • net: macb: Added support for RX filtering · ae8223de
      Rafal Ozieblo authored
      This patch allows steering received packets to different hardware
      queues (aka ntuple filtering); a toy model of the matching logic
      follows this entry.
      Signed-off-by: Rafal Ozieblo <rafalo@cadence.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ae8223de
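
      As a rough mental model only (not the GEM hardware or the macb driver
      code), the toy program below shows the kind of masked 4-tuple
      comparison that decides which RX queue a packet is steered to; every
      name and value in it is made up for illustration.

        /* Toy model of ntuple steering: a rule compares only the fields its
         * mask enables and sends matching packets to its queue. */
        #include <stdint.h>
        #include <stdio.h>

        struct four_tuple {
            uint32_t ip_src, ip_dst;
            uint16_t port_src, port_dst;
        };

        struct steer_rule {
            struct four_tuple value;   /* field values to compare against */
            struct four_tuple mask;    /* all-ones = field must match, 0 = ignore */
            unsigned int queue;        /* destination RX queue */
        };

        /* Return the queue of the first matching rule, or default queue 0. */
        static unsigned int steer(const struct four_tuple *pkt,
                                  const struct steer_rule *rules, int n)
        {
            for (int i = 0; i < n; i++) {
                const struct steer_rule *r = &rules[i];

                if ((pkt->ip_src & r->mask.ip_src) != (r->value.ip_src & r->mask.ip_src))
                    continue;
                if ((pkt->ip_dst & r->mask.ip_dst) != (r->value.ip_dst & r->mask.ip_dst))
                    continue;
                if ((pkt->port_src & r->mask.port_src) != (r->value.port_src & r->mask.port_src))
                    continue;
                if ((pkt->port_dst & r->mask.port_dst) != (r->value.port_dst & r->mask.port_dst))
                    continue;
                return r->queue;
            }
            return 0;
        }

        int main(void)
        {
            /* One rule: anything with destination port 5001 goes to queue 1. */
            struct steer_rule rules[] = {
                { .value = { .port_dst = 5001 },
                  .mask  = { .port_dst = 0xffff },
                  .queue = 1 },
            };
            struct four_tuple pkt = { .ip_src = 0x0a000001, .ip_dst = 0x0a000002,
                                      .port_src = 40000, .port_dst = 5001 };

            printf("packet steered to queue %u\n", steer(&pkt, rules, 1));
            return 0;
        }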
    • net: macb: Added some queue statistics · 512286bb
      Rafal Ozieblo authored
      Added statistics per queue (an illustrative model follows this entry):
      - qX_rx_packets
      - qX_rx_bytes
      - qX_rx_dropped
      - qX_tx_packets
      - qX_tx_bytes
      - qX_tx_dropped
      Signed-off-by: Rafal Ozieblo <rafalo@cadence.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      512286bb
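
      Purely to illustrate the naming scheme above (not the driver's actual
      data structures), here is a minimal C model of per-queue counters
      printed in the q<index>_<counter> form these statistics typically take
      in ethtool -S output.

        /* Illustrative model of per-queue RX/TX counters and how their
         * names are composed per queue. */
        #include <stdint.h>
        #include <stdio.h>

        #define NUM_QUEUES 4

        struct queue_stats {
            uint64_t rx_packets, rx_bytes, rx_dropped;
            uint64_t tx_packets, tx_bytes, tx_dropped;
        };

        int main(void)
        {
            struct queue_stats stats[NUM_QUEUES] = { { 0 } };

            /* Pretend queue 1 received one 64-byte frame. */
            stats[1].rx_packets++;
            stats[1].rx_bytes += 64;

            for (int q = 0; q < NUM_QUEUES; q++) {
                printf("q%d_rx_packets: %llu\n", q, (unsigned long long)stats[q].rx_packets);
                printf("q%d_rx_bytes:   %llu\n", q, (unsigned long long)stats[q].rx_bytes);
                printf("q%d_rx_dropped: %llu\n", q, (unsigned long long)stats[q].rx_dropped);
                printf("q%d_tx_packets: %llu\n", q, (unsigned long long)stats[q].tx_packets);
                printf("q%d_tx_bytes:   %llu\n", q, (unsigned long long)stats[q].tx_bytes);
                printf("q%d_tx_dropped: %llu\n", q, (unsigned long long)stats[q].tx_dropped);
            }
            return 0;
        }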
    • net: macb: Added support for many RX queues · ae1f2a56
      Rafal Ozieblo authored
      To enable packet reception on different RX queues, some configuration
      has to be performed. This patch checks how many hardware queues the
      GEM supports and initializes them (a sketch of this count-and-initialize
      pattern follows this entry).
      Signed-off-by: Rafal Ozieblo <rafalo@cadence.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ae1f2a56
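
      A hedged sketch of the general pattern only (the capability bitmap here
      is hypothetical, not taken from the GEM datasheet or from this patch):
      count the RX queues the hardware advertises, then initialize each one,
      with queue 0 assumed to always exist.

        /* Hypothetical example: derive the number of usable RX queues from a
         * capability bitmap and initialize a software context for each. */
        #include <stdint.h>
        #include <stdio.h>

        #define MAX_QUEUES 32

        struct rx_queue {
            unsigned int index;
            int initialized;
            /* descriptor rings, interrupt resources, etc. would live here */
        };

        static unsigned int count_hw_queues(uint32_t queue_mask)
        {
            queue_mask |= 0x1;                  /* queue 0 is always present */
            return (unsigned int)__builtin_popcount(queue_mask);
        }

        static void init_rx_queue(struct rx_queue *q, unsigned int index)
        {
            q->index = index;
            q->initialized = 1;                 /* allocate rings, enable IRQs, ... */
        }

        int main(void)
        {
            uint32_t queue_mask = 0x7;          /* pretend the hardware reports 3 queues */
            unsigned int nqueues = count_hw_queues(queue_mask);
            struct rx_queue queues[MAX_QUEUES];

            for (unsigned int i = 0; i < nqueues && i < MAX_QUEUES; i++)
                init_rx_queue(&queues[i], i);

            printf("initialized %u RX queues\n", nqueues);
            return 0;
        }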
    • vmxnet3: increase default rx ring sizes · 7475908f
      Shrikrishna Khare authored
      There are several reasons for increasing the receive ring sizes:
      
      1. The original ring size of 256 was chosen about 10 years ago when
      vmxnet3 was first created. At that time, 10Gbps Ethernet was not prevalent
      and servers were dominated by 1Gbps Ethernet. Now 10Gbps is commonplace,
      and higher-bandwidth links -- 25Gbps, 40Gbps, 50Gbps -- are starting
      to appear. 256 Rx ring entries are simply not enough to keep up with
      these higher link speeds when a burst of network frames arrives. Even
      with full-MTU-size frames, the entries are used up in a short time. It
      is also more common to have a mix of frame sizes, and more likely a
      bi-modal distribution of frame sizes, so the average frame size is not
      close to the full MTU. If we consider an average frame size of 800B,
      1024 frames arriving in a burst take ~0.65 ms at 10Gbps, while 256
      entries fill in ~0.16 ms at 10Gbps. At 25Gbps or 40Gbps this time
      shrinks accordingly (the arithmetic is worked in the sketch after this
      entry).
      
      2. On a hypervisor where there are many VMs and the CPU is overcommitted,
      i.e. the number of VCPUs is greater than the number of PCPUs, each PCPU is
      in effect time-shared between multiple VMs/VCPUs. The time granularity at
      which this multiplexing occurs is typically coarser than between processes
      on a guest OS. Trying to time-slice more finely is not efficient; for
      example, the memory cache is barely warmed up when a switch from one VM
      to another occurs. This CPU overcommit adds delay before the driver
      in a VM can service incoming packets. Whether the CPU is overcommitted
      really depends on customer workloads; for certain situations, such as
      desktop VM workloads and product testing setups, it is very common.
      Consolidation and sharing are what drive the efficiency of a customer
      setup for such workloads. In these situations, the raw network bandwidth
      may not be very high, but the gaps between the periods when a VM is
      actually running can be relatively long.
      Signed-off-by: Shrikrishna Khare <skhare@vmware.com>
      Acked-by: Jin Heo <heoj@vmware.com>
      Acked-by: Guolin Yang <gyang@vmware.com>
      Acked-by: Boon Ang <bang@vmware.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7475908f
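
      The timing figures in point 1 can be reproduced directly; this small C
      program works the same arithmetic (ring entries x 800 B x 8 bits,
      divided by the link rate) for the quoted ring sizes and link speeds.

        /* Reproduce the burst-arrival times quoted in the commit message:
         * how long N average-size frames take to arrive at a given rate. */
        #include <stdio.h>

        int main(void)
        {
            const double frame_bytes = 800.0;           /* average frame size */
            const double gbps[] = { 10.0, 25.0, 40.0 }; /* link speeds */
            const int rings[] = { 256, 1024 };          /* RX ring depths */

            for (int s = 0; s < 3; s++) {
                for (int r = 0; r < 2; r++) {
                    double bits = rings[r] * frame_bytes * 8.0;  /* bits in the burst */
                    double ms = bits / (gbps[s] * 1e9) * 1e3;    /* arrival time in ms */

                    printf("%4d frames @ %2.0f Gbps arrive in %.3f ms\n",
                           rings[r], gbps[s], ms);
                }
            }
            return 0;
        }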
    • net: dsa: bcm_sf2: Utilize b53_get_tag_protocol() · 9f66816a
      Florian Fainelli authored
      Utilize the much more capable b53_get_tag_protocol(), which takes care
      of all Broadcom switch specifics to resolve which ports can have
      Broadcom tags enabled.
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9f66816a