12 Jun, 2019 8 commits
    • tcp: add optional per socket transmit delay · a842fe14
      Eric Dumazet authored
      Adding delays to TCP flows is crucial for studying behavior
      of TCP stacks, including congestion control modules.
      
      Linux offers the netem module, but it has impractical constraints:
      - Needs root access to change the qdisc
      - Hard to set up on egress if combined with a non-trivial qdisc like FQ
      - A single delay for all flows.
      
      EDT (Earliest Departure Time) adoption in the TCP stack allows us
      to enable a per-socket delay at very small cost.
      
      Networking tools can now establish thousands of flows, each of them
      with a different delay, simulating real world conditions.
      
      This requires the FQ packet scheduler or an EDT-enabled NIC.
      
      This patch adds the TCP_TX_DELAY socket option, which sets a delay in
      usec units.
      
        unsigned int tx_delay = 10000; /* 10 msec */
      
        setsockopt(fd, SOL_TCP, TCP_TX_DELAY, &tx_delay, sizeof(tx_delay));
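
      As a self-contained illustration (not part of the patch), a minimal
      client might set the option before connecting. The destination address,
      port and the fallback defines below are assumptions for the sketch;
      recent system headers already provide TCP_TX_DELAY.

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <netinet/tcp.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <unistd.h>

        #ifndef SOL_TCP
        #define SOL_TCP IPPROTO_TCP   /* SOL_TCP equals IPPROTO_TCP on Linux */
        #endif
        #ifndef TCP_TX_DELAY
        #define TCP_TX_DELAY 37       /* assumed to match include/uapi/linux/tcp.h */
        #endif

        int main(void)
        {
            unsigned int tx_delay = 10000;     /* 10 msec, in usec units */
            struct sockaddr_in addr;
            int fd = socket(AF_INET, SOCK_STREAM, 0);

            if (fd < 0) {
                perror("socket");
                return 1;
            }

            /* Ask the stack to delay this socket's transmits by tx_delay usec. */
            if (setsockopt(fd, SOL_TCP, TCP_TX_DELAY, &tx_delay, sizeof(tx_delay)))
                perror("setsockopt(TCP_TX_DELAY)");

            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_port = htons(80);         /* illustrative destination */
            inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)))
                perror("connect");

            close(fd);
            return 0;
        }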
      
      Note that the FQ packet scheduler limits might need some tweaking:
      
      man tc-fq
      
      PARAMETERS
         limit
             Hard  limit  on  the  real  queue  size. When this limit is
             reached, new packets are dropped. If the value is  lowered,
             packets  are  dropped so that the new limit is met. Default
             is 10000 packets.
      
         flow_limit
             Hard limit on the maximum  number  of  packets  queued  per
             flow.  Default value is 100.
      
      Use of the TCP_TX_DELAY option will increase the number of skbs in the
      FQ qdisc, so packets would be dropped if either of the previous limits
      is hit.
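
      As a rough, assumption-based sizing aid (at steady state, the skbs
      parked per flow is roughly the delay times the per-flow packet rate),
      the sketch below estimates how far the defaults may need to be raised;
      the flow count, per-flow rate and delay are purely illustrative.

        #include <stdio.h>

        int main(void)
        {
            /* Illustrative assumptions, not measurements. */
            unsigned long long flows         = 1000;    /* concurrent delayed flows */
            unsigned long long tx_delay_usec = 10000;   /* per-socket TCP_TX_DELAY  */
            unsigned long long pkts_per_sec  = 5000;    /* per-flow transmit rate   */

            /* skbs held in FQ per flow ~= delay (sec) * packet rate (pkt/sec) */
            unsigned long long per_flow = tx_delay_usec * pkts_per_sec / 1000000;

            printf("fq flow_limit should exceed %llu (default 100)\n", per_flow);
            printf("fq limit should exceed %llu (default 10000)\n", per_flow * flows);
            return 0;
        }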
      
      Use of a jump label makes this support cost nothing at runtime for
      hosts never using the option.
      
      Also note that TSQ (TCP Small Queues) limits are slightly changed with
      this patch: we need to account for the fact that artificially delayed
      skbs must not stop us from providing more skbs to feed the pipe (netem
      uses skb_orphan_partial() for this purpose, but FQ can not use this
      trick).

      Because of that, using big delays might very well trigger old bugs in
      the TSO auto-defer logic and/or sndbuf-limited detection.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a842fe14
    • Merge branch 'ena-dynamic-queue-sizes' · e0ffbd37
      David S. Miller authored
      Sameeh Jubran says:
      
      ====================
      Support for dynamic queue size changes
      
      This patchset introduces the following:
      * add a new admin command for supporting different queue sizes for Tx/Rx
      * add support for Tx/Rx queue size modification through ethtool
      * allow queue allocation backoff when low on memory
      * update driver version
      
      Difference from v2:
      * Dropped superfluous range checks which are already done in ethtool. [patch 5/7]
      * Dropped inline keyword from function. [patch 4/7]
      * Added a new patch which drops the inline keyword from all *.c files. [patch 6/7]
      
      Difference from v1:
      * Changed ena_update_queue_sizes() signature to use u32 instead of int
        type for the size arguments. [patch 5/7]
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e0ffbd37
    • net: ena: update driver version from 2.0.3 to 2.1.0 · dbbc6e68
      Sameeh Jubran authored
      Update driver version to match device specification.
      Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      dbbc6e68
    • net: ena: remove inline keyword from functions in *.c · c2b54204
      Sameeh Jubran authored
      Let the compiler decide whether functions in *.c files should be inlined.
      Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c2b54204
    • net: ena: add ethtool function for changing io queue sizes · eece4d2a
      Sameeh Jubran authored
      Implement the set_ringparam() function of the ethtool interface
      to enable changing the io queue sizes.
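
      From user space this path is exercised through the standard ethtool ring
      interface (e.g. "ethtool -G <iface> rx N tx N"), which issues the
      ETHTOOL_SRINGPARAM ioctl that lands in the driver's set_ringparam(). The
      sketch below shows that ioctl directly; the interface name and the
      1024-entry sizes are illustrative assumptions, and setting the ring
      sizes requires CAP_NET_ADMIN.

        #include <linux/ethtool.h>
        #include <linux/sockios.h>
        #include <net/if.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
            struct ethtool_ringparam ring = { .cmd = ETHTOOL_GRINGPARAM };
            struct ifreq ifr;
            int fd = socket(AF_INET, SOCK_DGRAM, 0);

            if (fd < 0) {
                perror("socket");
                return 1;
            }

            memset(&ifr, 0, sizeof(ifr));
            strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* illustrative iface */
            ifr.ifr_data = (char *)&ring;

            /* Read current/max ring sizes (what "ethtool -g" reports). */
            if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
                perror("ETHTOOL_GRINGPARAM");
                close(fd);
                return 1;
            }

            /* Request new sizes; the driver's set_ringparam() applies them. */
            ring.cmd = ETHTOOL_SRINGPARAM;
            ring.rx_pending = 1024;                       /* illustrative sizes */
            ring.tx_pending = 1024;
            if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
                perror("ETHTOOL_SRINGPARAM");

            close(fd);
            return 0;
        }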
      Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
      Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      eece4d2a
    • net: ena: allow queue allocation backoff when low on memory · 13ca32a6
      Sameeh Jubran authored
      If there is not enough memory to allocate io queues, the driver will
      try to allocate smaller queues.
      
      The backoff algorithm is as follows (a sketch of the loop appears after
      the list):

      1. Try to allocate the TX and RX queues.
      1.1. If successful, return success.

      2. Halve the size of the larger of the RX and TX queues (or both, if their sizes are equal).

      3. If the TX or RX queue size is now smaller than 256:
      3.1. Return failure.
      4. Else:
      4.1. Go back to step 1.
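
      A minimal, self-contained sketch of that retry loop (the helper name,
      the always-failing stub and the 1024-entry starting sizes are
      illustrative stand-ins, not the driver's actual code):

        #include <stdbool.h>
        #include <stdio.h>

        #define MIN_RING_SIZE 256          /* lower bound from step 3 above */

        /* Stand-in for real queue creation; always "fails" to show the walk. */
        static bool create_queues(unsigned int tx_size, unsigned int rx_size)
        {
            printf("trying tx=%u rx=%u\n", tx_size, rx_size);
            return false;
        }

        static int create_queues_with_backoff(unsigned int tx_size,
                                              unsigned int rx_size)
        {
            for (;;) {
                if (create_queues(tx_size, rx_size))
                    return 0;                      /* step 1.1: success */

                /* Step 2: halve the larger queue, or both when equal. */
                if (tx_size > rx_size) {
                    tx_size /= 2;
                } else if (rx_size > tx_size) {
                    rx_size /= 2;
                } else {
                    tx_size /= 2;
                    rx_size /= 2;
                }

                /* Step 3: give up once either queue drops below 256. */
                if (tx_size < MIN_RING_SIZE || rx_size < MIN_RING_SIZE)
                    return -1;                     /* step 3.1: failure */
                /* Step 4.1: otherwise retry with the smaller sizes. */
            }
        }

        int main(void)
        {
            return create_queues_with_backoff(1024, 1024) ? 1 : 0;
        }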
      
      Also change the tx_queue_size and rx_queue_size field names in struct
      adapter to requested_tx_queue_size and requested_rx_queue_size, and use
      RX and TX queue 0 for the actual queue sizes.
      Explanation:
      The original fields were of little use: they were only read once in
      ena_probe() to assign a value to each of the adapter's queues, and
      could simply have been deleted. Now that we have a backoff feature,
      however, they become useful: with backoff there can be a difference
      between the requested queue sizes and the actual ones, so the requested
      sizes must be saved for future retries of queue allocation (for example,
      if allocation failed and ifdown + ifup was then called, we want to
      restart the allocation from the originally requested queue sizes).
      Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
      Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      13ca32a6
    • net: ena: make ethtool show correct current and max queue sizes · 9f9ae3f9
      Sameeh Jubran authored
      Currently ethtool -g shows the same value for the current and max queue
      sizes; make it report the correct ones.
      Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
      Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9f9ae3f9
    • net: ena: enable negotiating larger Rx ring size · 31aa9857
      Sameeh Jubran authored
      Use the MAX_QUEUES_EXT get-feature capability to query the device.
      Signed-off-by: Netanel Belgazal <netanel@amazon.com>
      Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      31aa9857