1. 22 Feb, 2019 24 commits
  2. 21 Feb, 2019 16 commits
    • Merge branch 'mlxsw-Support-for-shared-buffers-in-Spectrum-2' · 2fb44dd0
      David S. Miller authored
      Ido Schimmel says:
      
      ====================
      mlxsw: Support for shared buffers in Spectrum-2
      
      Petr says:
      
      Spectrum-2 will be configured with a different set of pools than
      Spectrum-1, their sizes will be larger, and the individual quotas will
      be different as well. It is therefore necessary to make the shared
      buffer module aware of this dependence on chip type, and adjust the
      individual tables.
      
      In patch #1, introduce a structure for keeping per-chip immutable and
      default values.
      
      In patch #2, structures for keeping current values of SBPM and SBPR
      (pool configuration and port-pool quota) are allocated dynamically to
      support varying pool counts.
      
      In patches #3 to #7, uses of individual shared buffer configuration
      tables are migrated from global definitions to fields in struct
      mlxsw_sp_sb_vals, which was introduced above.
      
      Up until this point, the actual configuration is still the one suitable
      for Spectrum-1. In patch #8, the Spectrum-2 configuration is added.
      
      In patch #9, the port headroom configuration is changed to take into
      account the current recommended value for a 100-Gbps port, and the split
      factor.
      
      In patch #10, requests for overlarge headroom are rejected. This avoids
      potential chip freeze should such overlarge requests be made.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: spectrum_buffers: Reject overlarge headroom size requests · bb6c346c
      Petr Machata authored
      cap_max_headroom_size holds the maximum supported headroom size.
      Overstepping that limit might, under certain conditions, lead to an ASIC
      freeze.
      
      Query and store the value, and add mlxsw_sp_sb_max_headroom_cells() for
      obtaining the stored value. In __mlxsw_sp_port_headroom_set(), reject
      requests where the total port buffer is larger than the advertised
      maximum.
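
      A minimal sketch of the kind of check this adds (the helper name, types
      and error code are simplified stand-ins, not the driver's actual code):

      	/* Reject a headroom configuration whose total size, in cells,
      	 * exceeds what the ASIC advertises via cap_max_headroom_size.
      	 */
      	static int headroom_check(u32 max_headroom_cells,
      				  const u32 *buf_size_cells, int bufs)
      	{
      		u32 total_cells = 0;
      		int i;

      		for (i = 0; i < bufs; i++)
      			total_cells += buf_size_cells[i];

      		return total_cells > max_headroom_cells ? -ENOBUFS : 0;
      	}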
      Signed-off-by: Petr Machata <petrm@mellanox.com>
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: spectrum_buffers: Update port headroom configuration · edf777f5
      Petr Machata authored
      The recommended headroom size for a 100Gbps port with a 100m cable is
      101.6KB, reduced accordingly for split ports. The closest higher number
      that is evenly divisible by the cell size of both Spectrum-1 and
      Spectrum-2, and whose cell count can be further divided by the maximum
      split factor of 4, is 102528 bytes, or 25632 bytes per lane.
      
      Update mlxsw_sp_port_pb_init() to compute the headroom taking into
      account this recommended per-lane value and number of lanes actually
      dedicated to a given port.
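
      As a quick sanity check of the arithmetic (assuming the commonly cited
      cell sizes of 96 bytes for Spectrum-1 and 144 bytes for Spectrum-2):

      	102528 / 96  = 1068 cells (Spectrum-1), 1068 / 4 = 267 cells per lane
      	102528 / 144 =  712 cells (Spectrum-2),  712 / 4 = 178 cells per lane
      	102528 / 4   = 25632 bytes per lane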
      Signed-off-by: Petr Machata <petrm@mellanox.com>
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: spectrum_buffers: Add Spectrum-2 shared buffer configuration · fe099bf6
      Petr Machata authored
      Customize the tables related to shared buffer configuration to match the
      current recommendation for Spectrum-2 systems.
      Signed-off-by: Petr Machata <petrm@mellanox.com>
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: spectrum_buffers: Keep mlxsw_sp_sb_mm in sb_vals · 13f35cc4
      Petr Machata authored
      The SBMM register configures the shared buffer quota for MC packets
      according to Switch-Priority. The default configuration depends on the
      chip type. Therefore keep the table and length in struct
      mlxsw_sp_sb_vals. Redirect the references from the global definitions to
      the fields.
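
      Schematically, the change follows this pattern (configure_mm() and the
      field names are illustrative stand-ins, not the exact driver code):

      	/* before: iterate a global table with a hard-coded length */
      	for (i = 0; i < MLXSW_SP_SB_MMS_LEN; i++)
      		configure_mm(mlxsw_sp, &mlxsw_sp_sb_mms[i]);

      	/* after: iterate the per-chip table hung off the driver state */
      	for (i = 0; i < mlxsw_sp->sb_vals->mms_count; i++)
      		configure_mm(mlxsw_sp, &mlxsw_sp->sb_vals->mms[i]);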
      Signed-off-by: Petr Machata <petrm@mellanox.com>
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: spectrum_buffers: Keep mlxsw_sp_sb_cm in sb_vals · bb60a62e
      Petr Machata authored
      The SBCM register configures the shared buffer quota according to
      port-priority (on ingress) and port-TC (on egress). The default
      configuration depends on the
      chip type. Therefore keep the tables and their lengths in struct
      mlxsw_sp_sb_vals. Redirect the references from the global definitions to
      the fields.
      Signed-off-by: Petr Machata <petrm@mellanox.com>
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: spectrum_buffers: Keep mlxsw_sp_sb_prs in mlxsw_sp_sb_vals · 5d25232e
      Petr Machata authored
      The SBPR register configures shared buffer pools. The default
      configuration depends on the chip type. Therefore keep it in struct
      mlxsw_sp_sb_vals. Redirect the one reference from the global array to
      the field.
      
      Because the pool descriptor ID is implicit in the ordering of array
      members, both this array and the pool descriptor array have the same
      length. Therefore reuse mlxsw_sp_sb.pool_dess_len for the purpose of
      determining the length of the SBPR array.
      
      Drop the now useless MLXSW_SP_SB_PRS_LEN.
      Signed-off-by: Petr Machata <petrm@mellanox.com>
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: spectrum_buffers: Keep mlxsw_sp_sb_pms in mlxsw_sp_sb_vals · cc1ce6ff
      Petr Machata authored
      The SBPM register can be used to configure quotas for packets ingressing
      from a certain pool to a certain port, and egressing from a certain pool
      to a certain port. The default configuration depends on the chip type.
      Therefore keep it in struct mlxsw_sp_sb_vals. Redirect the one reference
      from the global array to the field.
      
      Because the pool descriptor ID is implicit in the ordering of array
      members, both this array and the pool descriptor array have the same
      length. Therefore reuse mlxsw_sp_sb.pool_dess_len for the purpose of
      determining the length of the SBPM array.
      
      Drop the now useless MLXSW_SP_SB_PMS_LEN.
      Signed-off-by: Petr Machata <petrm@mellanox.com>
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: spectrum_buffers: Keep pool descriptors in mlxsw_sp_sb_vals · 5d65f5f4
      Petr Machata authored
      Keep the table of pool descriptors and its length in struct
      mlxsw_sp_sb_vals so that it can be specialized per chip type. Redirect
      all users from the global definitions to the mlxsw_sp_sb fields.
      
      Give mlxsw_sp_pool_count() an extra mlxsw_sp parameter so that it can
      access the descriptor table.
      
      Drop the now unnecessary MLXSW_SP_SB_POOL_DESS_LEN.
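
      Roughly, the signature change looks like this (bodies simplified; the
      length field name follows the naming used elsewhere in this series):

      	/* before: the pool count was a compile-time constant */
      	static u16 mlxsw_sp_pool_count(void)
      	{
      		return MLXSW_SP_SB_POOL_DESS_LEN;
      	}

      	/* after: the count comes from the chip-specific descriptor table */
      	static u16 mlxsw_sp_pool_count(struct mlxsw_sp *mlxsw_sp)
      	{
      		return mlxsw_sp->sb_vals->pool_dess_len;
      	}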
      Signed-off-by: Petr Machata <petrm@mellanox.com>
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: spectrum_buffers: Allocate prs & pms dynamically · 93d201f7
      Petr Machata authored
      Spectrum-2 will be configured with a different set of pools than
      Spectrum-1. The sizes of the prs and pms buffers will therefore depend
      on the chip type of the device.
      
      Therefore, instead of reserving an array directly in a structure
      definition, allocate the buffer in mlxsw_sp_sb_port{,s}_init().
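
      A minimal sketch of the idea for the per-port pms buffer (types and the
      helper name are simplified; the prs buffer is handled analogously):

      	static int sb_port_init(struct mlxsw_sp_sb_port *sb_port,
      				unsigned int pool_count)
      	{
      		/* The array is now sized at runtime from the chip's pool
      		 * count instead of being a fixed-length struct member.
      		 */
      		sb_port->pms = kcalloc(pool_count, sizeof(*sb_port->pms),
      				       GFP_KERNEL);
      		if (!sb_port->pms)
      			return -ENOMEM;
      		return 0;
      	}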
      Signed-off-by: Petr Machata <petrm@mellanox.com>
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: spectrum: Add struct mlxsw_sp_sb_vals · c39f3e0e
      Petr Machata authored
      Spectrum-2 will be configured with a different shared buffer
      configuration than Spectrum-1. Therefore introduce a structure for
      keeping the chip-specific default and immutable configuration.
      
      Configuration that is mutable at runtime will still be kept in struct
      mlxsw_sp_sb.
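
      Conceptually, the structure bundles the pool descriptors and the default
      register tables together with their lengths, roughly along these lines
      (field names and layout are illustrative, not the exact definition):

      	struct mlxsw_sp_sb_vals {
      		unsigned int pool_dess_len;
      		const struct mlxsw_sp_sb_pool_des *pool_dess;
      		const struct mlxsw_sp_sb_pr *prs;	/* SBPR defaults */
      		const struct mlxsw_sp_sb_pm *pms;	/* SBPM defaults */
      		const struct mlxsw_sp_sb_cm *cms_ingress;	/* SBCM */
      		const struct mlxsw_sp_sb_cm *cms_egress;	/* SBCM */
      		const struct mlxsw_sp_sb_mm *mms;	/* SBMM defaults */
      		unsigned int cms_ingress_count;
      		unsigned int cms_egress_count;
      		unsigned int mms_count;
      	};

      	/* Spectrum-1 and Spectrum-2 each provide their own instance, and
      	 * the driver points its sb_vals pointer at the right one for the
      	 * probed chip.
      	 */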
      Signed-off-by: Petr Machata <petrm@mellanox.com>
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'net-stmmac-Performance-improvements-in-Multi-Queue' · fdb89a31
      David S. Miller authored
      Jose Abreu says:
      
      ====================
      net: stmmac: Performance improvements in Multi-Queue
      
      Tested in XGMAC2 and GMAC5.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: stmmac: dwxgmac2: Also use TBU interrupt to clean TX path · ae9f346d
      Jose Abreu authored
      The TBU interrupt is a normal interrupt and can be used to trigger the
      cleaning of the TX path. Let's check if it's active in the DMA interrupt
      handler.
      
      While at it, refactor the function a little bit:
      	- Don't check if RI is enabled because at function exit we will
      	  only clear the interrupts that are enabled, so no event will
      	  be missed.
      
      In my tests with XGMAC2 this increased performance.
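
      Schematically, the handler now treats TBU like TI as a reason to kick
      the TX-cleaning NAPI (status-bit and return-value names here follow the
      usual DWMAC conventions and are illustrative, not the exact macros):

      	/* TI: transmit interrupt; TBU: transmit buffer unavailable.
      	 * Either one means the TX path has work for NAPI to clean up.
      	 */
      	if (likely(intr_status & (DMA_STATUS_TI | DMA_STATUS_TBU)))
      		ret |= handle_tx;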
      Signed-off-by: Jose Abreu <joabreu@synopsys.com>
      Cc: Joao Pinto <jpinto@synopsys.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
      Cc: Alexandre Torgue <alexandre.torgue@st.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: stmmac: dwmac4: Also use TBU interrupt to clean TX path · 1103d3a5
      Jose Abreu authored
      The TBU interrupt is a normal interrupt and can be used to trigger the
      cleaning of the TX path. Let's check if it's active in the DMA interrupt
      handler.
      
      While at it, refactor the function a little bit:
      	- Don't check if RI is enabled because at function exit we will
      	  only clear the interrupts that are enabled, so no event will be
      	  missed.
      
      In my tests with GMAC5 this increased performance.
      Signed-off-by: Jose Abreu <joabreu@synopsys.com>
      Cc: Joao Pinto <jpinto@synopsys.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
      Cc: Alexandre Torgue <alexandre.torgue@st.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: stmmac: Fix NAPI poll in TX path when in multi-queue · 4ccb4585
      Jose Abreu authored
      Commit 8fce3331 introduced the concept of per-channel NAPI and
      independent cleaning of the TX path.
      
      This is currently breaking performance in some cases. The scenario
      happens when all packets are being received in Queue 0 but the TX is
      performed in Queue != 0.
      
      Fix this by using a different NAPI instance for each TX and RX queue, as
      suggested by Florian.
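
      A rough sketch of the resulting shape (the real driver hangs these off
      its per-channel structure; the names here are simplified):

      	struct chan {
      		struct napi_struct rx_napi;	/* polls the RX ring only */
      		struct napi_struct tx_napi;	/* cleans the TX ring only */
      		u32 index;
      	};

      	/* RX traffic on queue 0 schedules chan[0].rx_napi, while TX
      	 * completions on queue N schedule chan[N].tx_napi, so a busy RX
      	 * queue no longer has to clean an unrelated TX queue.
      	 */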
      
      Changes from v2:
      	- Only force restart transmission if there are pending packets
      Changes from v1:
      	- Pass entire ring size to TX clean path (Florian)
      Signed-off-by: Jose Abreu <joabreu@synopsys.com>
      Cc: Florian Fainelli <f.fainelli@gmail.com>
      Cc: Joao Pinto <jpinto@synopsys.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
      Cc: Alexandre Torgue <alexandre.torgue@st.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'net-Get-rid-of-switchdev_port_attr_get' · d0e698d5
      David S. Miller authored
      Florian Fainelli says:
      
      ====================
      net: Get rid of switchdev_port_attr_get()
      
      This patch series splits up the switchdev_ops removal that was proposed
      a few times before, and first tackles the easy part: the removal of the
      single call to switchdev_port_attr_get() within the bridge code.
      
      As suggested by Ido, this patch series adds a
      SWITCHDEV_ATTR_ID_PORT_PRE_BRIDGE_FLAGS attribute which is used in the
      same context as the caller of switchdev_port_attr_set(), so not
      deferred, and then the actual setting of the supported bridge port flags
      is carried out in deferred context.
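
      Roughly, the bridge-side caller then looks like this (error handling
      trimmed; the details of the actual bridge code may differ):

      	struct switchdev_attr attr = {
      		.orig_dev = dev,
      		.id = SWITCHDEV_ATTR_ID_PORT_PRE_BRIDGE_FLAGS,
      		.u.brport_flags = mask,
      	};
      	int err;

      	/* Same context as the caller, not deferred: ask whether the
      	 * requested flags are supported at all.
      	 */
      	err = switchdev_port_attr_set(dev, &attr);
      	if (err == -EOPNOTSUPP)
      		return 0;	/* no switchdev driver cares about these flags */
      	if (err)
      		return err;	/* an unsupported flag was requested */

      	/* The actual operation is then carried out deferred. */
      	attr.id = SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS;
      	attr.u.brport_flags = flags;
      	attr.flags = SWITCHDEV_F_DEFER;
      	return switchdev_port_attr_set(dev, &attr);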
      
      Follow-up patches will do the switchdev_ops removal after introducing
      the proper helpers for the switchdev blocking notifier to work across
      stacked devices (unlike the previous submissions).
      
      David, this does depend on Russell's "[PATCH net-next v5 0/3] net: dsa:
      mv88e6xxx: fix IPv6".
      
      Changes in v3:
      
      - rebased against net-next/master after Russell's IPv6 changes to DSA
      - ignore prepare/commit phase for PRE_BRIDGE_FLAGS since we don't
        want to trigger the WARN() in net/switchdev/switchdev.c in the commit
        phase
      
      Changes in v2:
      
      - differentiate callers not supporting switchdev_port_attr_set() from
        the driver not being able to support specific bridge flags
      
      - pass "mask" instead of "flags" for the PRE_BRIDGE_FLAGS check
      
      - skip prepare phase for PRE_BRIDGE_FLAGS
      
      - corrected documentation a bit more
      
      - tested bridge_vlan_aware.sh with veth/VRF
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>