1. 04 Sep, 2019 10 commits
      can: af_can: give variables holding CAN statistics a sensible name · e2c1f5c7
      Marc Kleine-Budde authored
      This patch renames the variables holding the CAN statistics
      (can_stats and can_pstats) to pkg_stats and rcv_lists_stats, which
      better reflect their meaning.
      
      The conversion is done with:
      
      	sed -i \
      		-e "s/can_stats\([^_]\)/pkg_stats\1/g" \
      		-e "s/can_pstats/rcv_lists_stats/g" \
      		net/can/af_can.c
      Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de>
      Acked-by: Oliver Hartkopp <socketcan@hartkopp.net>
      Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
      can: netns: give members of struct netns_can holding the statistics a sensible name · 2341086d
      Marc Kleine-Budde authored
      This patch gives the members of struct netns_can that hold the
      statistics sensible names, by renaming struct netns_can::can_stats
      to struct netns_can::pkg_stats and struct netns_can::can_pstats to
      struct netns_can::rcv_lists_stats.
      
      The conversion is done with:
      
      	sed -i \
      		-e "s:\(struct[^*]*\*\)can_stats;.*:\1pkg_stats;:" \
      		-e "s:\(struct[^*]*\*\)can_pstats;.*:\1rcv_lists_stats;:" \
      		-e "s/can\.can_stats/can.pkg_stats/g" \
      		-e "s/can\.can_pstats/can.rcv_lists_stats/g" \
      		net/can/*.[ch] \
      		include/net/netns/can.h
      Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de>
      Acked-by: Oliver Hartkopp <socketcan@hartkopp.net>
      Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
      can: netns: give structs holding the CAN statistics a sensible name · 6c43bb3a
      Marc Kleine-Budde authored
      This patch renames "struct s_stats" to "struct can_pkg_stats" and
      "struct s_pstats" to "struct can_rcv_lists_stats" to better
      reflect their meaning and improve code readability.
      
      The conversion is done with:
      
      	sed -i \
      		-e "s/struct s_stats/struct can_pkg_stats/g" \
      		-e "s/struct s_pstats/struct can_rcv_lists_stats/g" \
      		net/can/*.[ch] \
      		include/net/netns/can.h
      Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de>
      Acked-by: Oliver Hartkopp <socketcan@hartkopp.net>
      Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
      Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue · 2c1f9e26
      David S. Miller authored
      Jeff Kirsher says:
      
      ====================
      100GbE Intel Wired LAN Driver Updates 2019-09-03
      
      This series contains updates to ice driver only.
      
      Anirudh adds the ability for the driver to handle EMP resets correctly
      by adding the logic to the existing ice_reset_subtask().
      
      Jeb fixes up the logic to properly free up the resources for a switch
      rule whether or not it was successful in the removal.
      
      Brett fixes up the reporting of ITR values to let the user know odd ITR
      values are not allowed.  He also fixes the driver to only disable VLAN
      pruning on VLAN deletion when the VLAN being deleted is the last VLAN
      on the VF VSI.
      
      Chinh updates the driver to determine the TSA value from the priority
      value when in CEE mode.
      
      Bruce aligns the driver with the hardware specification by ensuring that
      a PF reset is done as part of the unload logic.  He also updates the
      driver-unloading field, based on the latest hardware specification, which
      allows us to remove an unnecessary endian conversion, and moves #defines
      to where they are needed in the code.
      
      Jesse adds the current state of auto-negotiation in the link up message.
      In addition, adds additional information to inform the user of an issue
      with the topology/configuration of the link.
      
      Usha updates the driver to allow the maximum TCs that the firmware
      supports, rather than hard coding to a set value.
      
      Dave updates the DCB initialization flow to handle the case of an actual
      error during DCB init.  He also updates the driver to report the current
      stats, even when the netdev is down, which aligns with our other drivers.
      
      Mitch fixes the VF reset code flows to ensure that it properly calls
      ice_dis_vsi_txq() to notify the firmware that the VF is being reset.
      
      Michal fixes the driver so that DCB is not enabled when SW LLDP is
      activated, which was causing a communication issue with other NICs.  The
      problem was that DCB was being enabled without checking the number
      of TCs.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Merge tag 'mlx5-updates-2019-09-01-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux · 94810bd3
      David S. Miller authored
      Saeed Mahameed says:
      
      ====================
      mlx5-updates-2019-09-01  (Software steering support)
      
      Abstract:
      --------
      Mellanox ConnectX devices support packet matching, packet modification
      and redirection.  These functionalities are also referred to as
      flow steering.  To configure a steering rule, the rule is written to
      device-owned memory; this memory is accessed and cached by the device
      when processing a packet.
      Steering rules are constructed from multiple steering entries (STEs).
      
      Rules are configured using the Firmware command interface.  The Firmware
      processes the given driver commands and translates them to STEs, then
      writes them to the device memory in the current steering tables.
      This process is slow due to the architecture of the command interface and
      the processing complexity of each rule.
      
      The highlight of this patchset is to cut out the middleman (the firmware)
      and program steering rules into the device directly from the driver, with
      no firmware intervention whatsoever.
      
      Motivation:
      -----------
      Software (driver managed) steering allows for high rule insertion rates
      compared to the FW steering described above; this is achieved by using
      internal RDMA writes to the device-owned memory instead of the slow
      command interface to program steering rules.
      
      Software (driver managed) steering doesn't depend on new FW for new
      steering functionality; new implementations can be done in the driver,
      skipping the FW layer.
      
      Performance:
      ------------
      The insertion rate on a single core using the new approach allows
      programming ~300K rules per second (measured via a direct raw test
      against the new mlx5 SW steering layer, without any kernel layer
      involved).
      
      Test: TC L2 rules
      33K/s with Software steering (this patchset).
      5K/s  with FW and current driver.
      This will improve OVS based solution performance.
      
      Architecture and implementation details:
      ----------------------------------------
      Software steering will be dynamically selected via devlink device
      parameter. Example:
      $ devlink dev param show pci/0000:06:00.0 name flow_steering_mode
                pci/0000:06:00.0:
                name flow_steering_mode type driver-specific
                values:
                   cmode runtime value smfs
      
      mlx5 software steering module a.k.a (DR - Direct Rule) is implemented
      and contained in mlx5/core/steering directory and controlled by
      MLX5_SW_STEERING kconfig flag.
      
      mlx5 core steering layer (fs_core) already provides a shim layer for
      implementing different steering mechanisms, software steering will
      leverage that as seen at the end of this series.
      
      When Software Steering for a specific steering domain
      (NIC/RDMA/Vport/ESwitch, etc ..) is supported, it will cause rules
      targeting this domain to be created using  SW steering instead of FW.
      
      The implementation includes:
      Domain - The steering domain is the object that all other objects reside
          in. It holds the memory allocator, send engine, locks and other
          shared data needed by lower objects such as tables, matchers, rules
          and actions. Each domain can contain multiple tables. A domain is
          equivalent to the namespaces, e.g. (NIC/RDMA/Vport/ESwitch, etc ..),
          as implemented currently in mlx5_core fs_core (flow steering core).
      
      Table - Table objects are used for holding multiple matchers; each table
          has a level used to prevent processing loops. Packets are directed
          to this table once it is set as the root table; this is done by
          fs_core using a FW command. A packet is processed inside the table,
          matcher by matcher, until a successful hit; otherwise the packet
          will perform the default action.
      
      Matcher - Matcher objects are used to specify the field masks for
          matching when processing a packet. A matcher belongs to a table;
          each matcher can hold multiple rules, each rule with different
          matching values corresponding to the matcher mask. Each matcher has
          a priority used for rule processing order inside the table.
      
      Action - Action objects are created to specify different steering actions
          such as count, reformat (encapsulate, decapsulate, ...), modify
          header, forward to table and many other actions. When creating a rule
          a sequence of actions can be provided to be executed on a successful
          match.
      
      Rule - Rule objects are used to specify a specific match on packets as
          well as the actions that should be executed. A rule belongs to a
          matcher.
      
      STE - This layer is used to hold the specific STE format for the device
          and to convert the requested rule to STEs. Each rule is constructed
          of an STE chain; multiple rules construct a steering graph. Each
          node in the graph is a hash table containing multiple STEs. The
          index of each STE in the hash table is calculated using a CRC32
          hash function.
      
      Memory pool - Used for managing and caching device-owned memory for rule
          insertion. The memory is allocated using the DM (device memory) API.
      
      Communication with device - A layer for standard RDMA operations using an
          RC QP to configure the device steering.
      
      Command utility - This module holds all of the FW commands that are
          required for SW steering to function.
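      The STE indexing idea described above can be sketched in C. This is a
      hedged illustration only: the ste_index() helper, the use of the IEEE
      CRC32 polynomial and the power-of-two table size are assumptions made
      for the example, not the actual mlx5 DR code or the device's exact hash.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Plain bitwise CRC32 (IEEE, reflected) for illustration; the device's
 * exact polynomial and match-value layout are not specified here. */
static uint32_t crc32_le(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;
	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int b = 0; b < 8; b++)
			crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1));
	}
	return ~crc;
}

/* Each node in the steering graph is a hash table of STEs; the bucket
 * for an STE is picked from the CRC32 of the match value. */
static uint32_t ste_index(const uint8_t *match, size_t len, uint32_t htbl_size)
{
	/* htbl_size is assumed to be a power of two. */
	return crc32_le(match, len) & (htbl_size - 1);
}
```

      A power-of-two table size lets the index be taken with a mask instead
      of a modulo, a common choice for hash tables on a fast path.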
      
      Patch planning and files:
      -------------------------
      1) The first patch adds support for adding flow steering actions to the
      fs_cmd shim layer.

      2) The next 12 patches add a file per Software steering
      functionality/module as described above. (See patches titled: DR, *)

      3) Add CONFIG_MLX5_SW_STEERING for software steering support and enable
      the build with the new files.
      
      4) Next two patches will add the support for software steering in mlx5
      steering shim layer
      net/mlx5: Add API to set the namespace steering mode
      net/mlx5: Add direct rule fs_cmd implementation
      
      5) The last two patches add the new devlink parameter to select the mlx5
      steering mode, valid only for switchdev mode for now.
      Two modes are supported:
          1. DMFS - Device managed flow steering
          2. SMFS - Software/Driver managed flow steering.

          In the DMFS mode, the HW steering entities are created through the
          FW. In the SMFS mode, these entities are created through the driver
          directly.

          The driver will use the devlink steering mode only if the steering
          domain supports it; for now SMFS will manage only the switchdev
          eswitch steering domain.
      
          User command examples:
          - Set SMFS flow steering mode::
      
              $ devlink dev param set pci/0000:06:00.0 name flow_steering_mode value "smfs" cmode runtime
      
          - Read device flow steering mode::
      
              $ devlink dev param show pci/0000:06:00.0 name flow_steering_mode
                pci/0000:06:00.0:
                name flow_steering_mode type driver-specific
                values:
                   cmode runtime value smfs
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ice: Only disable VLAN pruning for the VF when all VLANs are removed · cd186e51
      Brett Creeley authored
      Currently if the VF adds a VLAN, VLAN pruning will be enabled for that VSI.
      Also, when a VLAN gets deleted it will disable VLAN pruning even if other
      VLAN(s) exist for the VF. Fix this by only disabling VLAN pruning on the
      VF VSI when removing the last VLAN (i.e. vf->num_vlan == 0).
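      A minimal sketch of the fixed condition. The struct and helper names
      are hypothetical; only the vf->num_vlan field is taken from the commit
      text above.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the fix, not the actual ice driver code. */
struct vf_model {
	int num_vlan;		/* VLANs currently on the VF VSI */
	bool vlan_pruning;
};

static void vf_del_vlan(struct vf_model *vf)
{
	if (vf->num_vlan > 0)
		vf->num_vlan--;
	/* Only disable pruning once the last VLAN is removed. */
	if (vf->num_vlan == 0)
		vf->vlan_pruning = false;
}
```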
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      ice: Remove enable DCB when SW LLDP is activated · 03bba020
      Michal Swiatkowski authored
      Remove code that enables DCB in initialization when SW LLDP is
      activated. The DCB flag is set or reset earlier in ice_init_pf_dcb
      based on the number of TCs, so there is no need to overwrite it.

      Setting DCB without checking the number of TCs can cause communication
      problems with other cards. The host card sends packets with a VLAN
      priority tag, but the client card doesn't strip this tag and ping
      doesn't work.
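      The check the commit relies on can be sketched as below. The helper
      name and the exact num_tc > 1 threshold are assumptions for
      illustration, not the ice_init_pf_dcb implementation.

```c
#include <assert.h>
#include <stdbool.h>

/* Hedged sketch: the DCB flag should follow the TC count computed
 * during init rather than being forced on unconditionally. */
static bool dcb_should_be_enabled(int num_tc)
{
	return num_tc > 1;	/* a single TC means DCB is not needed */
}
```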
      Signed-off-by: Michal Swiatkowski <michal.swiatkowski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      ice: Report stats when VSI is down · 3d57fd10
      Dave Ertman authored
      There is currently a check in get_ndo_stats that returns before
      updating stats if the VSI is down or there are no Tx or Rx queues.
      This causes the netdev to report zero stats when the netdev is down.

      Remove the check so that the behavior of reporting stats is the
      same as it was in IXGBE.
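      The behavioral change can be sketched as follows; the struct and
      function names are hypothetical models, not the actual ice code.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model: before the fix, stats collection returned early
 * when the VSI was down, so the netdev reported zeros; after the fix
 * the previously collected counters are still reported. */
struct vsi_model {
	bool down;
	unsigned long tx_packets;	/* last collected counter value */
};

static unsigned long get_stats(const struct vsi_model *vsi)
{
	/* The early "if (vsi->down) return 0;" style check is gone, so
	 * cached counters are reported even while the netdev is down. */
	return vsi->tx_packets;
}
```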
      Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      ice: Always notify FW of VF reset · 06914ac2
      Mitch Williams authored
      The call to ice_dis_vsi_txq() acts as the notification to the firmware
      that the VF is being reset. Because of this, we need to make this call
      every time we reset, regardless of whatever else we do to stop the Tx
      queues.
      
      Without this change, VF resets would fail to complete on interfaces that
      were up and running.
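      A minimal sketch of the requirement; the fw_model struct and helpers
      are hypothetical, and only the role of ice_dis_vsi_txq() as the FW
      notification is taken from the text above.

```c
#include <assert.h>
#include <stdbool.h>

/* Hedged model: the queue-disable call doubles as the firmware's reset
 * notification, so the reset path must issue it unconditionally, even
 * when the Tx queues were already stopped by other means. */
struct fw_model { bool notified; };

static void dis_vsi_txq(struct fw_model *fw)
{
	fw->notified = true;	/* stands in for ice_dis_vsi_txq() */
}

static void vf_reset(struct fw_model *fw, bool queues_already_stopped)
{
	(void)queues_already_stopped;	/* irrelevant: always notify FW */
	dis_vsi_txq(fw);
}
```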
      Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      ice: Correctly handle return values for init DCB · 473ca574
      Dave Ertman authored
      In the init path for DCB, the call to ice_init_dcb() can return a
      non-zero value either for an actual error, or because the FW LLDP
      engine is stopped.

      We are currently treating all non-zero values only as an indication
      that the FW LLDP engine is stopped.

      Check for an actual error in the DCB init flow.
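      The distinction can be sketched as below; the enum values and the
      handler are illustrative, not the actual ice status codes.

```c
#include <assert.h>

/* Hedged sketch: ice_init_dcb() can return non-zero either because the
 * FW LLDP engine is stopped (expected; fall back to SW mode) or because
 * of a real error (abort). These codes are made up for illustration. */
enum dcb_ret { DCB_OK = 0, DCB_FW_LLDP_STOPPED = 1, DCB_HW_ERR = 2 };

static int handle_dcb_init(enum dcb_ret ret, int *sw_lldp)
{
	if (ret == DCB_OK)
		return 0;
	if (ret == DCB_FW_LLDP_STOPPED) {
		*sw_lldp = 1;	/* not an error: continue in SW LLDP mode */
		return 0;
	}
	return -1;		/* a genuine DCB init failure */
}
```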
      Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  2. 03 Sep, 2019 28 commits
  3. 02 Sep, 2019 2 commits
      Merge branch 'mvpp2-per-cpu-buffers' · 67538eb5
      David S. Miller authored
      Matteo Croce says:
      
      ====================
      mvpp2: per-cpu buffers
      
      This patchset works around a PP2 HW limitation which prevents the use
      of per-cpu rx buffers.
      The first patch is just a refactor to prepare for the second one.
      The second one allocates percpu buffers if the following conditions are met:
      - the CPU number is less than or equal to 4
      - no port is using jumbo frames

      If these conditions are not met at load time, or jumbo frames are enabled
      later on, the allocation is reverted to the shared scheme.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      mvpp2: percpu buffers · 7d04b0b1
      Matteo Croce authored
      Every mvpp2 unit can use up to 8 buffers mapped by the BM (the HW buffer
      manager). The HW will place the frames in the buffer pool depending on
      the frame size: short (< 128 bytes), long (< 1664) or jumbo (up to 9856).

      As any unit can have up to 4 ports, the driver allocates only 2 pools,
      one for short and one for long frames, and shares them between ports.
      When the first port MTU is set higher than 1664 bytes, a third pool is
      allocated for jumbo frames.

      This shared allocation makes it impossible to use percpu allocators,
      and creates contention between HW queues.

      If possible, i.e. if the number of possible CPUs is less than 8 and
      jumbo frames are not used, switch to a new scheme: allocate 8 per-cpu
      pools for short and long frames and bind every pool to an RXQ.

      When the first port MTU is set higher than 1664 bytes, the allocation
      scheme is reverted to the old behaviour (3 shared pools), and when all
      ports' MTUs are lowered, the per-cpu buffers are allocated again.
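      The selection logic described above can be sketched like this. The
      function name is hypothetical; only the 8-pool limit and the 1664-byte
      long-frame cutoff are taken from the text, and the two-pools-per-CPU
      budget is an assumption for the example.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative decision only, not mvpp2 source: one short + one long
 * pool per CPU must fit within the pools the BM offers, and no port
 * may need the jumbo pool. */
#define BM_POOLS_MAX	8	/* pools available to a mvpp2 unit */
#define MTU_LONG_MAX	1664	/* largest "long" frame size */

static bool use_percpu_pools(int nr_cpus, int max_port_mtu)
{
	return 2 * nr_cpus <= BM_POOLS_MAX && max_port_mtu <= MTU_LONG_MAX;
}
```

      With two pools per CPU, the 8-pool budget yields the "at most 4 CPUs"
      condition stated in the cover letter.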
      Signed-off-by: Matteo Croce <mcroce@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>