1. 21 Nov, 2023 5 commits
    • net: axienet: Introduce dmaengine support · 6a91b846
      Radhey Shyam Pandey authored
      Add dmaengine framework support to communicate with the Xilinx
      DMA engine driver (AXIDMA).
      
      The axi ethernet driver uses separate channels for transmit and
      receive. Add support for these channels to handle TX and RX with skb
      and appropriate callbacks. Also add the axi ethernet core interrupt
      for dmaengine framework support.
      
      The dmaengine framework was extended for metadata API support.
      However, it still needs further enhancements to make it well suited
      for ethernet use cases. Ethernet features such as ethtool set/get of
      DMA IP properties and ndo_poll_controller (mentioned in the TODO)
      are not yet supported and require follow-up discussion.
      
      dmaengine support has a dependency on xilinx_dma, as it uses the
      xilinx_vdma_channel_set_config() API to reset the DMA IP, which
      internally resets the MAC prior to accessing MDIO.
      
      Benchmark with netperf:
      
      xilinx-zcu102-20232:~$ netperf -H 192.168.10.20 -t TCP_STREAM
      MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET
      to 192.168.10.20 () port 0 AF_INET
      Recv   Send    Send
      Socket Socket  Message  Elapsed
      Size   Size    Size     Time     Throughput
      bytes  bytes   bytes    secs.    10^6bits/sec
      
      131072  16384  16384    10.02     886.69
      
      xilinx-zcu102-20232:~$ netperf -H 192.168.10.20 -t UDP_STREAM
      MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET
      to 192.168.10.20 () port 0 AF_INET
      Socket  Message  Elapsed      Messages
      Size    Size     Time         Okay Errors   Throughput
      bytes   bytes    secs            #      #   10^6bits/sec
      
      212992   65507   10.00       15851      0     830.66
      212992           10.00       15851            830.66
      Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
      Link: https://lore.kernel.org/r/1700074613-1977070-4-git-send-email-radhey.shyam.pandey@amd.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • net: axienet: Preparatory changes for dmaengine support · 6b1b40f7
      Sarath Babu Naidu Gaddam authored
      The axiethernet driver has inbuilt DMA programming. In order to add
      dmaengine support and make its integration seamless, the current
      axidma inbuilt programming code is put under a use_dmaengine check.
      
      It also performs minor code reordering to minimize conditional
      use_dmaengine checks; there is no functional change. The driver uses
      the "dmas" property to identify whether it should use the dmaengine
      framework or inbuilt axidma programming.
      Signed-off-by: Sarath Babu Naidu Gaddam <sarath.babu.naidu.gaddam@amd.com>
      Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
      Link: https://lore.kernel.org/r/1700074613-1977070-3-git-send-email-radhey.shyam.pandey@amd.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • dt-bindings: net: xlnx,axi-ethernet: Introduce DMA support · 5e63c5ef
      Radhey Shyam Pandey authored
      Xilinx 1G/2.5G Ethernet Subsystem provides 32-bit AXI4-Stream buses to
      move transmit and receive Ethernet data to and from the subsystem.
      
      These buses are designed to be used with an AXI Direct Memory Access
      (DMA) IP or AXI Multichannel Direct Memory Access (MCDMA) IP core, an
      AXI4-Stream Data FIFO, or any other custom logic in any supported
      device.
      
      Primary high-speed DMA data movement between system memory and the
      stream target is through the AXI4 Read Master to AXI4 memory-mapped
      to stream (MM2S) Master, and the AXI stream to memory-mapped (S2MM)
      Slave to AXI4 Write Master. AXI DMA/MCDMA enables channels of data
      movement on both the MM2S and S2MM paths in scatter/gather mode.
      
      AXI DMA has two channels, whereas MCDMA has 16 TX and 16 RX channels.
      To uniquely identify each channel, a 'chan' suffix is used. Depending
      on the use case, the AXI ethernet driver can request any combination
      of multichannel DMA channels using the generic 'dmas' and 'dma-names'
      properties.
      
      Example:
      dma-names = "tx_chan0", "rx_chan0", "tx_chan1", "rx_chan1";
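      Under these bindings, an ethernet node requesting one TX and one RX
      channel could be sketched as below. This is an illustrative fragment,
      not taken from a real board file: the node name, unit addresses, the
      axi_dma_0 phandle, and the channel specifier values are placeholders.

      ```dts
      ethernet@80000000 {
              compatible = "xlnx,axi-ethernet-1.00.a";
              reg = <0x80000000 0x40000>;
              /* One AXI DMA TX channel and one RX channel; the DMA
               * controller phandle and specifiers are hypothetical. */
              dmas = <&axi_dma_0 0>, <&axi_dma_0 1>;
              dma-names = "tx_chan0", "rx_chan0";
      };
      ```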
      Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
      Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
      Link: https://lore.kernel.org/r/1700074613-1977070-2-git-send-email-radhey.shyam.pandey@amd.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • selftests: net: verify fq per-band packet limit · a0bc96c0
      Willem de Bruijn authored
      Commit 29f834aa ("net_sched: sch_fq: add 3 bands and WRR
      scheduling") introduced multiple traffic bands and a per-band
      maximum packet count.
      
      Per-band limits ensure that packets in one class cannot fill the
      entire qdisc and thereby deny service to traffic in the other
      classes.
      
      Verify this behavior:
        1. set the limit to 10 per band
        2. send 20 pkts on band A: verify that 10 are queued, 10 dropped
        3. send 20 pkts on band A: verify that  0 are queued, 20 dropped
        4. send 20 pkts on band B: verify that 10 are queued, 10 dropped
      
      Packets must remain queued for a period to trigger this behavior.
      Use SO_TXTIME to store packets for 100 msec.
      
      The test reuses existing upstream test infra. The script is a fork of
      cmsg_time.sh, and it calls cmsg_sender.
      
      The test extends cmsg_sender with two arguments:
      
      * '-P' SO_PRIORITY
        There is a subtle difference between IPv4 and IPv6 stack behavior:
        PF_INET/IP_TOS        sets IP header bits and sk_priority
        PF_INET6/IPV6_TCLASS  sets IP header bits BUT NOT sk_priority
      
      * '-n' num pkts
        Send multiple packets in quick succession.
        I first attempted a for loop in the script, but this is too slow in
        virtualized environments, causing flakiness as the 100ms timeout is
        reached and packets are dequeued.
      
      Also do not wait for timestamps to be queued unless timestamps are
      requested.
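      The IPv4/IPv6 asymmetry behind the '-P' option can be observed from
      userspace with plain sockets. A minimal sketch, assuming a Linux host
      (the exact priority value that the kernel maps from a given TOS is an
      implementation detail, so only zero vs. nonzero is checked):

      ```python
      import socket

      # Linux-only socket options; numeric fallbacks are the Linux values.
      SO_PRIORITY = getattr(socket, "SO_PRIORITY", 12)
      IPV6_TCLASS = getattr(socket, "IPV6_TCLASS", 67)

      # PF_INET: setting IP_TOS also updates sk_priority via the kernel's
      # rt_tos2priority() mapping.
      s4 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      s4.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0x10)  # IPTOS_LOWDELAY
      prio4 = s4.getsockopt(socket.SOL_SOCKET, SO_PRIORITY)

      # PF_INET6: IPV6_TCLASS sets the header bits BUT NOT sk_priority,
      # hence the need for an explicit SO_PRIORITY knob in cmsg_sender.
      s6 = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
      s6.setsockopt(socket.IPPROTO_IPV6, IPV6_TCLASS, 0x10)
      prio6 = s6.getsockopt(socket.SOL_SOCKET, SO_PRIORITY)

      print(prio4, prio6)  # on Linux: nonzero for IPv4, still 0 for IPv6
      s4.close()
      s6.close()
      ```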
      Signed-off-by: Willem de Bruijn <willemb@google.com>
      Reviewed-by: Simon Horman <horms@kernel.org>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Link: https://lore.kernel.org/r/20231116203449.2627525-1-willemdebruijn.kernel@gmail.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • net: microchip: lan743x: bidirectional throughput improvement · 45933b2d
      Vishvambar Panth S authored
      The LAN743x/PCI11xxx DMA descriptors are always 4 dwords long, but the
      device supports placing the descriptors in memory back to back or
      reserving space between them using its DMA_DESCRIPTOR_SPACE (DSPACE)
      configurable hardware setting. Currently DSPACE is unnecessarily set
      to match the host's L1 cache line size, resulting in space reserved
      between descriptors on most platforms and causing suboptimal behavior
      (a single PCIe Mem transaction per descriptor). By changing the
      setting to DSPACE=16, many descriptors can be packed into a single
      PCIe Mem transaction, resulting in a large performance improvement in
      bidirectional tests without any negative effects.
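      The packing arithmetic can be sketched as follows; the 64-byte cache
      line and 64-byte PCIe read payload are illustrative assumptions, not
      values from the driver:

      ```python
      DESC_BYTES = 16    # 4 dwords per LAN743x/PCI11xxx descriptor
      TLP_PAYLOAD = 64   # assumed bytes fetched per PCIe Mem transaction

      def descriptors_per_transaction(dspace: int) -> int:
          """Descriptors a single PCIe read covers when each descriptor
          occupies `dspace` bytes of ring memory (floor of payload/stride,
          at least one descriptor per read)."""
          stride = max(dspace, DESC_BYTES)
          return max(1, TLP_PAYLOAD // stride)

      # DSPACE matched to a 64-byte L1 cache line: 1 descriptor per read.
      print(descriptors_per_transaction(64))
      # DSPACE packed down to the 16-byte descriptor size: 4 per read.
      print(descriptors_per_transaction(16))
      ```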
      Tested and verified improvements on an x64 PC and several ARM
      platforms (typical data below).
      
      Test setup 1: x64 PC with LAN7430 ---> x64 PC
      
      iperf3 UDP bidirectional with DSPACE set to L1 CACHE Size:
      - - - - - - - - - - - - - - - - - - - - - - - - -
      [ ID][Role] Interval           Transfer     Bitrate
      [  5][TX-C]   0.00-10.00  sec   170 MBytes   143 Mbits/sec  sender
      [  5][TX-C]   0.00-10.04  sec   169 MBytes   141 Mbits/sec  receiver
      [  7][RX-C]   0.00-10.00  sec  1.02 GBytes   876 Mbits/sec  sender
      [  7][RX-C]   0.00-10.04  sec  1.02 GBytes   870 Mbits/sec  receiver
      
      iperf3 UDP bidirectional with DSPACE set to 16 Bytes
      - - - - - - - - - - - - - - - - - - - - - - - - -
      [ ID][Role] Interval           Transfer     Bitrate
      [  5][TX-C]   0.00-10.00  sec  1.11 GBytes   956 Mbits/sec  sender
      [  5][TX-C]   0.00-10.04  sec  1.11 GBytes   951 Mbits/sec  receiver
      [  7][RX-C]   0.00-10.00  sec  1.10 GBytes   948 Mbits/sec  sender
      [  7][RX-C]   0.00-10.04  sec  1.10 GBytes   942 Mbits/sec  receiver
      
      Test setup 2: RK3399 with LAN7430 ---> x64 PC
      
      RK3399 Spec:
      The SOM-RK3399 is an ARM module designed and developed by FriendlyElec.
      Cores: 64-bit Dual Core Cortex-A72 + Quad Core Cortex-A53
      Frequency: Cortex-A72(up to 2.0GHz), Cortex-A53(up to 1.5GHz)
      PCIe: PCIe x4, compatible with PCIe 2.1, Dual operation mode
      
      iperf3 UDP bidirectional with DSPACE set to L1 CACHE Size:
      - - - - - - - - - - - - - - - - - - - - - - - - -
      [ ID][Role] Interval           Transfer     Bitrate
      [  5][TX-C]   0.00-10.00  sec   534 MBytes   448 Mbits/sec  sender
      [  5][TX-C]   0.00-10.05  sec   534 MBytes   446 Mbits/sec  receiver
      [  7][RX-C]   0.00-10.00  sec  1.12 GBytes   961 Mbits/sec  sender
      [  7][RX-C]   0.00-10.05  sec  1.11 GBytes   946 Mbits/sec  receiver
      
      iperf3 UDP bidirectional with DSPACE set to 16 Bytes
      - - - - - - - - - - - - - - - - - - - - - - - - -
      [ ID][Role] Interval           Transfer     Bitrate
      [  5][TX-C]   0.00-10.00  sec   966 MBytes   810 Mbits/sec   sender
      [  5][TX-C]   0.00-10.04  sec   965 MBytes   806 Mbits/sec   receiver
      [  7][RX-C]   0.00-10.00  sec  1.11 GBytes   956 Mbits/sec   sender
      [  7][RX-C]   0.00-10.04  sec  1.07 GBytes   919 Mbits/sec   receiver
      Signed-off-by: Vishvambar Panth S <vishvambarpanth.s@microchip.com>
      Reviewed-by: Simon Horman <horms@kernel.org>
      Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com>
      Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
      Link: https://lore.kernel.org/r/20231116054350.620420-1-vishvambarpanth.s@microchip.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  2. 19 Nov, 2023 13 commits
  3. 18 Nov, 2023 22 commits