- 20 Sep, 2022 40 commits
-
Paolo Abeni authored
Vladimir Oltean says:

====================
DSA changes for multiple CPU ports (part 4)

Those who have been following
part 1: https://patchwork.kernel.org/project/netdevbpf/cover/20220511095020.562461-1-vladimir.oltean@nxp.com/
part 2: https://patchwork.kernel.org/project/netdevbpf/cover/20220521213743.2735445-1-vladimir.oltean@nxp.com/
and part 3: https://patchwork.kernel.org/project/netdevbpf/cover/20220819174820.3585002-1-vladimir.oltean@nxp.com/
will know that I am trying to enable the second internal port pair from the NXP LS1028A Felix switch for DSA-tagged traffic via "ocelot-8021q". This series represents the final part of that effort.

We have:

- the introduction of new UAPI in the form of IFLA_DSA_MASTER, the iproute2 patch for which is here:
  https://patchwork.kernel.org/project/netdevbpf/patch/20220904190025.813574-1-vladimir.oltean@nxp.com/
- preparation for LAG DSA masters in terms of suppressing some operations for masters in the DSA core that simply don't make sense when those masters are a bonding/team interface
- handling all the net device events that occur between DSA and a LAG DSA master, including migration to a different DSA master when the current master joins a LAG, or the LAG gets destroyed
- updating documentation
- adding an implementation for NXP LS1028A, where things are insanely complicated due to hardware limitations. We have 2 tagging protocols:

  * the native "ocelot" protocol (NPI port mode). This does not support CPU ports in a LAG, and supports a single DSA master. The DSA master can be changed between eno2 (2.5G) and eno3 (1G), but all ports must be down during the changing process, and user ports assigned to the old DSA master will refuse to come up if the user requests that during a "transient" state.
  * the "ocelot-8021q" software-defined protocol, where the Ethernet ports connected to the CPU are not actually "god mode" ports as far as the hardware is concerned. So here, static assignment between user and CPU ports is possible by editing the PGID_SRC masks for the port-based forwarding matrix, and "CPU ports in a LAG" simply means "a LAG like any other".

The series was regression-tested on LS1028A using the local_termination.sh kselftest, in most of the possible operating modes and tagging protocols. I have not done a detailed performance evaluation yet, but using a LAG, it is possible to exceed the termination bandwidth of a single CPU port in an iperf3 test with multiple senders and multiple receivers.

v1 at: https://patchwork.kernel.org/project/netdevbpf/cover/20220830195932.683432-1-vladimir.oltean@nxp.com/
Previous (older) RFC at: https://lore.kernel.org/netdev/20220523104256.3556016-1-olteanv@gmail.com/
====================

Link: https://lore.kernel.org/r/20220911010706.2137967-1-vladimir.oltean@nxp.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Vladimir Oltean authored
Changing the DSA master means different things depending on the tagging protocol in use. For NPI mode ("ocelot" and "seville"), there is a single port which can be configured as NPI, but DSA only permits changing the CPU port affinity of user ports one by one. So changing a user port to a different NPI port globally changes what the NPI port is, and breaks the user ports still using the old one. To address this while still permitting the change of the NPI port, require that the user ports which are still affine to the old NPI port are down, and cannot be brought up until they are all affine to the same NPI port. The tag_8021q mode ("ocelot-8021q") is more flexible, in that each user port can be freely assigned to one CPU port or to the other. This works by filtering host addresses towards both tag_8021q CPU ports, and then restricting the forwarding from a certain user port only to one of the two tag_8021q CPU ports. Additionally, the 2 tag_8021q CPU ports can be placed in a LAG. This works by enabling forwarding via PGID_SRC from a certain user port towards the logical port ID containing both tag_8021q CPU ports, but then restricting forwarding per packet, via the LAG hash codes in PGID_AGGR, to either one or the other. When we change the DSA master to a LAG device, DSA guarantees us that the LAG has at least one lower interface as a physical DSA master. But DSA masters can come and go as lowers of that LAG, and ds->ops->port_change_master() will not get called, because the DSA master is still the same (the LAG). So we need to hook into the ds->ops->port_lag_{join,leave} calls on the CPU ports and update the logical port ID of the LAG that user ports are assigned to. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
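To illustrate the shape of the dispatch described above, here is a hedged sketch; felix_set_npi_port(), felix_8021q_assign_cpu() and felix_tag_proto() are hypothetical names used only for this example, not the actual driver symbols:

#include <net/dsa.h>
#include <linux/netdevice.h>

/* Hypothetical helpers standing in for the real NPI / tag_8021q logic. */
static int felix_set_npi_port(struct dsa_switch *ds, int port,
			      struct netlink_ext_ack *extack);
static int felix_8021q_assign_cpu(struct dsa_switch *ds, int port,
				  struct net_device *master,
				  struct netlink_ext_ack *extack);
static enum dsa_tag_protocol felix_tag_proto(struct dsa_switch *ds);

static int felix_port_change_master(struct dsa_switch *ds, int port,
				    struct net_device *master,
				    struct netlink_ext_ack *extack)
{
	switch (felix_tag_proto(ds)) {
	case DSA_TAG_PROTO_OCELOT:
	case DSA_TAG_PROTO_SEVILLE:
		/* NPI mode: one global NPI port; ports still affine to the
		 * old NPI port must be down before it can change
		 */
		return felix_set_npi_port(ds, port, extack);
	default:
		/* ocelot-8021q: per-port assignment via PGID_SRC masks */
		return felix_8021q_assign_cpu(ds, port, master, extack);
	}
}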
-
Vladimir Oltean authored
DSA now supports multiple CPU ports, explain the use cases that are covered, the new UAPI, the permitted degrees of freedom, the driver API, and remove some old "low-hanging fruit" items.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Vladimir Oltean authored
There are 2 ways in which a DSA user port may become handled by 2 CPU ports in a LAG:

(1) its current DSA master joins a LAG

    ip link del bond0 && ip link add bond0 type bond mode 802.3ad
    ip link set eno2 master bond0

    When this happens, all user ports with "eno2" as DSA master get automatically migrated to "bond0" as DSA master.

(2) it is explicitly configured as such by the user

    # Before, the DSA master was eno3
    ip link set swp0 type dsa master bond0

The design of this configuration is that the LAG device dynamically becomes a DSA master through dsa_master_setup() when the first physical DSA master becomes a LAG slave, and stops being so through dsa_master_teardown() when the last physical DSA master leaves. A LAG interface is considered as a valid DSA master only if it contains existing DSA masters, and no other lower interfaces. Therefore, we mainly rely on method (1) to enter this configuration.

Each physical DSA master (LAG slave) retains its dev->dsa_ptr for when it becomes a standalone DSA master again. But the LAG master also has a dev->dsa_ptr, and this is actually duplicated from one of the physical LAG slaves, and therefore needs to be balanced when LAG slaves come and go.

To the switch driver, putting DSA masters in a LAG is seen as putting their associated CPU ports in a LAG. We need to prepare cross-chip host FDB notifiers for CPU ports in a LAG, by calling the driver's ->lag_fdb_add method rather than ->port_fdb_add.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
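A rough sketch of the promotion/demotion idea, assuming the existing dsa_master_setup()/dsa_master_teardown() helpers; the event-handler shape and dsa_lag_has_masters() are illustrative, not the exact upstream code:

static int dsa_master_lag_join(struct net_device *master,
			       struct net_device *lag_dev)
{
	struct dsa_port *cpu_dp = master->dsa_ptr;

	/* First physical DSA master under this LAG: promote the LAG by
	 * duplicating the cpu_dp pointer onto it.
	 */
	if (!lag_dev->dsa_ptr)
		return dsa_master_setup(lag_dev, cpu_dp);

	return 0;
}

static void dsa_master_lag_leave(struct net_device *master,
				 struct net_device *lag_dev)
{
	/* dsa_lag_has_masters() is a hypothetical check for remaining
	 * physical DSA masters among the LAG's lowers.
	 */
	if (lag_dev->dsa_ptr && !dsa_lag_has_masters(lag_dev))
		/* last physical master gone: LAG stops being a DSA master */
		dsa_master_teardown(lag_dev);
}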
-
Vladimir Oltean authored
Drivers could refuse to offload a LAG configuration for a variety of reasons, mainly having to do with its TX type. Additionally, since DSA masters may now also be LAG interfaces, and this will translate into a call to port_lag_join on the CPU ports, there may be extra restrictions there. Propagate the netlink extack to this DSA method in order for drivers to give a meaningful error message back to the user. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
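For example, a driver's port_lag_join() can now report the reason for a refusal back through the extack; a minimal sketch using a hypothetical "foo" driver (the TX-type check mirrors what several DSA drivers already do):

/* 'foo' is a placeholder driver name for this sketch. */
static int foo_port_lag_join(struct dsa_switch *ds, int port,
			     struct dsa_lag lag,
			     struct netdev_lag_upper_info *info,
			     struct netlink_ext_ack *extack)
{
	if (info->tx_type != NETDEV_LAG_TX_TYPE_HASH) {
		NL_SET_ERR_MSG_MOD(extack,
				   "Can only offload LAG using hash TX type");
		return -EOPNOTSUPP;
	}

	return 0;
}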
-
Vladimir Oltean authored
These don't work (print a harmless error about the operation failing) and make little sense to have anyway, because when a LAG DSA master goes away, we will introduce logic to move our CPU port back to the first physical DSA master. So suppress these device links in preparation for adding support for LAG DSA masters. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Vladimir Oltean authored
Similar to the discussion about tracking the admin/oper state of LAG DSA masters, we have the problem here that struct dsa_port *cpu_dp caches a single pair of orig_ethtool_ops and netdev_ops pointers. So if we call dsa_master_setup(bond0, cpu_dp), where cpu_dp is also the dev->dsa_ptr of one of the physical DSA masters, we'd effectively overwrite what we cached from that physical netdev with what we cache from the bonding interface. We don't need DSA ethtool stats on the bonding interface when used as DSA master; it's good enough to have them just on the physical DSA masters, so suppress this logic.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Vladimir Oltean authored
We store information about the DSA master's state in cpu_dp->master_admin_up and cpu_dp->master_oper_up, and this assumes a bijective association between a CPU port and a DSA master. However, when we have CPU ports in a LAG (and DSA masters in a LAG too), the way in which we set up things is that the physical DSA masters still have dev->dsa_ptr pointing to our cpu_dp, but the bonding/team device itself also has its dev->dsa_ptr pointing towards one of the CPU port structures (the first one).

So logically speaking, that first cpu_dp can't keep track of both the physical master's admin/oper state, and of the bonding master's state. This isn't even needed; the reason why we keep track of the DSA master's state is to know when it is available for Ethernet-based register access. For that use case, we don't even need LAG; we just need to decide upon one of the physical DSA masters (if there is more than 1 available) and use that.

This change suppresses dsa_tree_master_{admin,oper}_state_change() calls on LAG DSA masters (which will be supported in a future change), to allow the tracking of just physical DSA masters.

Link: https://lore.kernel.org/netdev/628cc94d.1c69fb81.15b0d.422d@mx.google.com/
Suggested-by: Christian Marangi <ansuelsmth@gmail.com>
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
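A minimal sketch of the intent, not the literal diff; the wrapper function below is illustrative, built on the existing netif_is_lag_master() test and the helpers named above:

/* Illustrative wrapper for this sketch, not an upstream function. */
static void dsa_handle_master_state(struct dsa_switch_tree *dst,
				    struct net_device *master)
{
	/* Only physical DSA masters are tracked for the purpose of
	 * Ethernet-based register access; LAG DSA masters are skipped.
	 */
	if (netif_is_lag_master(master))
		return;

	dsa_tree_master_admin_state_change(dst, master,
					   !!(master->flags & IFF_UP));
	dsa_tree_master_oper_state_change(dst, master,
					  netif_oper_up(master));
}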
-
Vladimir Oltean authored
Some DSA switches have multiple CPU ports, which can be used to improve CPU termination throughput, but DSA, through dsa_tree_setup_cpu_ports(), sets up only the first one, leading to suboptimal use of hardware. The desire is to not change the default configuration but to permit the user to create a dynamic mapping between individual user ports and the CPU port that they are served by, configurable through rtnetlink. It is also intended to permit load balancing between CPU ports, and in that case, the foreseen model is for the DSA master to be a bonding interface whose lowers are the physical DSA masters. To that end, we create a struct rtnl_link_ops for DSA user ports with the "dsa" kind. We expose the IFLA_DSA_MASTER link attribute that contains the ifindex of the newly desired DSA master. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
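A hedged sketch of what such a rtnl_link_ops can look like; IFLA_DSA_MAX, the policy layout and the dsa_slave_change_master() entry point are assumptions for illustration, while IFLA_DSA_MASTER itself is the new UAPI named by the patch:

#include <net/rtnetlink.h>
#include <linux/if_link.h>

/* IFLA_DSA_MAX and the policy layout are assumptions for this sketch. */
static const struct nla_policy dsa_policy[IFLA_DSA_MAX + 1] = {
	[IFLA_DSA_MASTER] = { .type = NLA_U32 },	/* ifindex of the new DSA master */
};

static int dsa_changelink(struct net_device *dev, struct nlattr *tb[],
			  struct nlattr *data[],
			  struct netlink_ext_ack *extack)
{
	struct net_device *master;

	if (!data || !data[IFLA_DSA_MASTER])
		return 0;

	/* RTNL is held by the rtnetlink core around changelink() */
	master = __dev_get_by_index(dev_net(dev),
				    nla_get_u32(data[IFLA_DSA_MASTER]));
	if (!master)
		return -ENODEV;

	/* assumed entry point that performs the user port migration */
	return dsa_slave_change_master(dev, master, extack);
}

static struct rtnl_link_ops dsa_link_ops __read_mostly = {
	.kind		= "dsa",
	.maxtype	= IFLA_DSA_MAX,
	.policy		= dsa_policy,
	.changelink	= dsa_changelink,
};

With the matching iproute2 support, this is what lets the user run something like "ip link set swp0 type dsa master bond0".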
-
Vladimir Oltean authored
There is a desire to support DSA masters in a LAG. That configuration is intended to work by simply enslaving the master to a bonding/team device. But the physical DSA master (the LAG slave) still has a dev->dsa_ptr, and that cpu_dp still corresponds to the physical CPU port. However, we would like to be able to retrieve the LAG that's the upper of the physical DSA master. In preparation for that, introduce a helper called dsa_port_get_master() that replaces all occurrences of the dp->cpu_dp->master pattern. The distinction between LAG and non-LAG will be made later within the helper itself.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
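As described, the helper's initial form is a trivial wrapper (sketch; LAG awareness is what later patches add inside it):

static inline struct net_device *dsa_port_get_master(struct dsa_port *dp)
{
	/* today: always the physical master behind the port's CPU port */
	return dp->cpu_dp->master;
}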
-
Vladimir Oltean authored
Some network drivers use __dev_mc_sync()/__dev_uc_sync() and therefore program the hardware only with addresses with a non-zero sync_cnt. Some of the above drivers also need to save/restore the address filtering lists when certain events happen, and they need to walk through the struct net_device :: uc and struct net_device :: mc lists. But these lists contain unsynced addresses too. To keep the appearance of an elementary form of data encapsulation, provide iterators through these lists that only look at entries with a non-zero sync_cnt, instead of filtering entries out from device drivers. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
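A hedged sketch of the idea; the exact macro names and their expansion in netdevice.h may differ, and foo_hw_add_mc() is a hypothetical driver callback:

#include <linux/netdevice.h>

/* hypothetical driver callback used only in this sketch */
static void foo_hw_add_mc(struct net_device *dev, const unsigned char *addr);

/* Walk only multicast addresses that have actually been synced to us;
 * the macro name is an assumption about the new iterator's shape.
 */
#define netdev_for_each_synced_mc_addr(ha, dev) \
	netdev_for_each_mc_addr((ha), (dev)) \
		if (!(ha)->sync_cnt) { } else

static void foo_restore_mc_filter(struct net_device *dev)
{
	struct netdev_hw_addr *ha;

	netdev_for_each_synced_mc_addr(ha, dev)
		foo_hw_add_mc(dev, ha->addr);
}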
-
Paolo Abeni authored
Tony Nguyen says:

====================
ice: L2TPv3 offload support

Wojciech Drewek says:

Add support for dissecting L2TPv3 session id in flow dissector. Add support for this field in tc-flower and support offloading L2TPv3. Finally, add support for hardware offload of L2TPv3 packets based on session id in switchdev mode in ice driver.

Example filter:

# tc filter add dev $PF1 ingress prio 1 protocol ip \
	flower \
	ip_proto l2tp \
	l2tpv3_sid 1234 \
	skip_sw \
	action mirred egress redirect dev $VF1_PR

Changes in iproute2 are required to use the new fields. ICE COMMS DDP package is required to create a filter in ice. COMMS DDP package contains profiles of more advanced protocols. Without COMMS DDP package hw offload will not work, however sw offload will still work.
====================

Link: https://lore.kernel.org/r/20220908171644.1282191-1-anthony.l.nguyen@intel.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Marcin Szycik authored
Add support for offloading packets based on L2TPv3 session id in switchdev mode.

Example filter:

tc filter add dev $PF1 ingress prio 1 protocol ip flower ip_proto l2tp \
	l2tpv3_sid 1234 skip_sw action mirred egress redirect dev $VF1_PR

Changes in iproute2 are required to be able to specify l2tpv3_sid. ICE COMMS DDP package is required to create a filter as it contains L2TPv3 profiles.

Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Signed-off-by: Marcin Szycik <marcin.szycik@linux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Wojciech Drewek authored
Allow to offload L2TPv3 filters by adding flow_rule_match_l2tpv3. Drivers can extract L2TPv3 specific fields from now on. Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
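A sketch of how a classifier offload driver consumes the new match; the "foo" function name is a placeholder, while flow_rule_match_l2tpv3() and FLOW_DISSECTOR_KEY_L2TPV3 are what this patch adds:

#include <net/flow_offload.h>

/* placeholder parsing helper for this sketch */
static int foo_parse_l2tpv3(struct flow_rule *rule, __be32 *session_id)
{
	struct flow_match_l2tpv3 match;

	if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_L2TPV3))
		return -EOPNOTSUPP;

	flow_rule_match_l2tpv3(rule, &match);

	if (match.mask->session_id)
		*session_id = match.key->session_id;

	return 0;
}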
-
Wojciech Drewek authored
Add support for matching on L2TPv3 session ID. Session ID can be specified only when ip proto was set to IPPROTO_L2TP.

Example filter:

# tc filter add dev $PF1 ingress prio 1 protocol ip \
	flower \
	ip_proto l2tp \
	l2tpv3_sid 1234 \
	skip_sw \
	action mirred egress redirect dev $VF1_PR

Acked-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Wojciech Drewek authored
Allow to dissect the L2TPv3-specific field, which is:
- session ID (32 bits)

L2TPv3 might be transported over IP or over UDP; this implementation is only about L2TPv3 over IP. The IP protocol carries L2TPv3 when ip_proto is IPPROTO_L2TP (115).

Acked-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Wojciech Drewek authored
IPPROTO_L2TP is currently defined in l2tp.h, but most IP protocols are defined in the in.h file. Move it there in order to keep the code clean.

Acked-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Jules Irenge authored
Coccinelle reports a warning: "WARNING: casting value returned by memory allocation function to (struct octep_rx_buffer *) is useless." Fix this by removing the useless cast.

Signed-off-by: Jules Irenge <jbi.octave@gmail.com>
Link: https://lore.kernel.org/r/Yx+sr9o0uylXVcOl@playground
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Nathan Huckleberry authored
All usages of the vport_ops struct have the .send field set to dev_queue_xmit or internal_dev_recv. Since most usages are set to dev_queue_xmit, the function hook should match the signature of dev_queue_xmit. The only call to vport_ops->send() is in net/openvswitch/vport.c and it throws away the return value. This mismatched return type breaks forward edge kCFI since the underlying function definition does not match the function hook definition.

Reported-by: Dan Carpenter <error27@gmail.com>
Link: https://github.com/ClangBuiltLinux/linux/issues/1703
Cc: llvm@lists.linux.dev
Signed-off-by: Nathan Huckleberry <nhuck@google.com>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Acked-by: Eelco Chaudron <echaudro@redhat.com>
Link: https://lore.kernel.org/r/20220913230739.228313-1-nhuck@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
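A reduced sketch of the alignment being described; the real struct vport_ops in net/openvswitch/vport.h has many more members, only the .send field is shown here:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* sketch-only struct; stands in for the relevant part of vport_ops */
struct vport_ops_sketch {
	/* same prototype as dev_queue_xmit(), so assigning it directly no
	 * longer creates a mismatch that forward-edge kCFI rejects
	 */
	int (*send)(struct sk_buff *skb);
};

static const struct vport_ops_sketch netdev_vport_ops_sketch = {
	.send = dev_queue_xmit,
};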
-
Nathan Huckleberry authored
The ndo_start_xmit field in net_device_ops is expected to be of type netdev_tx_t (*ndo_start_xmit)(struct sk_buff *skb, struct net_device *dev). The mismatched return type breaks forward edge kCFI since the underlying function definition does not match the function hook definition. The return type of t7xx_ccmni_start_xmit should be changed from int to netdev_tx_t.

Reported-by: Dan Carpenter <error27@gmail.com>
Link: https://github.com/ClangBuiltLinux/linux/issues/1703
Cc: llvm@lists.linux.dev
Signed-off-by: Nathan Huckleberry <nhuck@google.com>
Acked-by: Sergey Ryazanov <ryazanov.s.a@gmail.com>
Link: https://lore.kernel.org/r/20220912214510.929070-1-nhuck@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
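This and the following ndo_start_xmit patches all apply the same pattern; a generic sketch with placeholder "foo" names, not the actual driver code:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static netdev_tx_t foo_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	/* a real driver hands the skb to hardware here; it is consumed
	 * either way, so free it in this placeholder
	 */
	dev_kfree_skb_any(skb);
	return NETDEV_TX_OK;
}

static const struct net_device_ops foo_netdev_ops = {
	/* the hook and the implementation now have identical prototypes,
	 * which is exactly what forward-edge kCFI checks
	 */
	.ndo_start_xmit	= foo_start_xmit,
};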
-
Nathan Huckleberry authored
The ndo_start_xmit field in net_device_ops is expected to be of type netdev_tx_t (*ndo_start_xmit)(struct sk_buff *skb, struct net_device *dev). The mismatched return type breaks forward edge kCFI since the underlying function definition does not match the function hook definition. The return type of ipc_wwan_link_transmit should be changed from int to netdev_tx_t.

Reported-by: Dan Carpenter <error27@gmail.com>
Link: https://github.com/ClangBuiltLinux/linux/issues/1703
Cc: llvm@lists.linux.dev
Signed-off-by: Nathan Huckleberry <nhuck@google.com>
Acked-by: Sergey Ryazanov <ryazanov.s.a@gmail.com>
Link: https://lore.kernel.org/r/20220912214455.929028-1-nhuck@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Nathan Huckleberry authored
The ndo_start_xmit field in net_device_ops is expected to be of type netdev_tx_t (*ndo_start_xmit)(struct sk_buff *skb, struct net_device *dev). The mismatched return type breaks forward edge kCFI since the underlying function definition does not match the function hook definition. The return type of korina_send_packet should be changed from int to netdev_tx_t.

Reported-by: Dan Carpenter <error27@gmail.com>
Link: https://github.com/ClangBuiltLinux/linux/issues/1703
Cc: llvm@lists.linux.dev
Signed-off-by: Nathan Huckleberry <nhuck@google.com>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/r/20220912214344.928925-1-nhuck@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Nathan Huckleberry authored
The ndo_start_xmit field in net_device_ops is expected to be of type netdev_tx_t (*ndo_start_xmit)(struct sk_buff *skb, struct net_device *dev). The mismatched return type breaks forward edge kCFI since the underlying function definition does not match the function hook definition. The return type of liteeth_start_xmit should be changed from int to netdev_tx_t.

Reported-by: Dan Carpenter <error27@gmail.com>
Link: https://github.com/ClangBuiltLinux/linux/issues/1703
Cc: llvm@lists.linux.dev
Signed-off-by: Nathan Huckleberry <nhuck@google.com>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Acked-by: Gabriel Somlo <gsomlo@gmail.com>
Link: https://lore.kernel.org/r/20220912195307.812229-1-nhuck@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Nathan Huckleberry authored
The ndo_start_xmit field in net_device_ops is expected to be of type netdev_tx_t (*ndo_start_xmit)(struct sk_buff *skb, struct net_device *dev). The mismatched return type breaks forward edge kCFI since the underlying function definition does not match the function hook definition. The return type of emac_dev_xmit should be changed from int to netdev_tx_t.

Reported-by: Dan Carpenter <error27@gmail.com>
Link: https://github.com/ClangBuiltLinux/linux/issues/1703
Cc: llvm@lists.linux.dev
Signed-off-by: Nathan Huckleberry <nhuck@google.com>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/r/20220912195023.810319-1-nhuck@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Nathan Huckleberry authored
The ndo_start_xmit field in net_device_ops is expected to be of type netdev_tx_t (*ndo_start_xmit)(struct sk_buff *skb, struct net_device *dev). The mismatched return type breaks forward edge kCFI since the underlying function definition does not match the function hook definition. The return type of dm9000_start_xmit should be changed from int to netdev_tx_t.

Reported-by: Dan Carpenter <error27@gmail.com>
Link: https://github.com/ClangBuiltLinux/linux/issues/1703
Cc: llvm@lists.linux.dev
Signed-off-by: Nathan Huckleberry <nhuck@google.com>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/r/20220912194722.809525-1-nhuck@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Nathan Huckleberry authored
The ndo_start_xmit field in net_device_ops is expected to be of type netdev_tx_t (*ndo_start_xmit)(struct sk_buff *skb, struct net_device *dev). The mismatched return type breaks forward edge kCFI since the underlying function definition does not match the function hook definition. The return type of ax88796c_start_xmit should be changed from int to netdev_tx_t.

Reported-by: Dan Carpenter <error27@gmail.com>
Link: https://github.com/ClangBuiltLinux/linux/issues/1703
Cc: llvm@lists.linux.dev
Signed-off-by: Nathan Huckleberry <nhuck@google.com>
Acked-by: Lukasz Stelmach <l.stelmach@samsung.com>
Link: https://lore.kernel.org/r/20220912194031.808425-1-nhuck@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Simon Wunderlich says:

====================
This cleanup patchset includes the following patches:

- bump version strings, by Simon Wunderlich
- drop unused headers in trace.h, by Sven Eckelmann
- drop initialization of flexible ethtool_link_ksettings, by Sven Eckelmann
- remove unused struct definitions, by Marek Lindner

* tag 'batadv-next-pullrequest-20220916' of git://git.open-mesh.org/linux-merge:
  batman-adv: remove unused struct definitions
  batman-adv: Drop initialization of flexible ethtool_link_ksettings
  batman-adv: Drop unused headers in trace.h
  batman-adv: Start new development cycle
====================

Link: https://lore.kernel.org/r/20220916161454.1413154-1-sw@simonwunderlich.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Petr Machata says:

====================
mlxsw: Adjust QOS tests for Spectrum-4 testing

Amit writes:

Quality Of Service tests create congestion and verify the switch behavior. To create congestion, they need to have more traffic than the port can handle, so some of them force 1Gbps speed.

The tests assume that 1Gbps speed is supported. Spectrum-4 ASIC will not support this speed in all ports, so to be able to run QOS tests there, some adjustments are required.

Patch set overview:

Patch #1 adjusts qos_ets_strict, qos_mc_aware and sch_ets tests.
Patch #2 adjusts RED tests.
Patch #3 extends devlink_lib to support querying maximum pool size.
Patch #4 adds a test which can be used instead of qos_burst and does not assume that 1Gbps speed is supported.
Patch #5 removes qos_burst test.
====================

Link: https://lore.kernel.org/r/cover.1663152826.git.petrm@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Amit Cohen authored
The previous patch added a test which can be used instead of qos_burst.sh. Remove this test. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Amit Cohen authored
Add a test equivalent to qos_burst; its purpose is the same, but the new test uses a simpler topology and does not require forcing low speed. In addition, it can be run on Spectrum-2 and not only on Spectrum-3+. The idea is to use a shaper in order to limit the traffic and create congestion.

The qos_burst test uses a small pool, sends many small packets, and verifies that packets are not dropped, which means that many descriptors can be handled. That test was meant to exercise the change introduced by commit c864769a ("mlxsw: Configure descriptor buffers").

Instead, the new test tries to use more than 85% of the maximum supported descriptors. The idea is to use a big pool (as much as the ASIC supports), such that the pool size does not limit the traffic, then send many small packets, which means that many descriptors are used, and check how many packets the switch can handle. Using a shaper allows the test to run on all ASICs, regardless of the CPU abilities, as it is able to create the congestion with a low rate of packets.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Amit Cohen authored
The maximum pool size is exposed via 'devlink sb' command. The next patch will add a test which increases some pools to the maximum size. Add a function to query the value. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Amit Cohen authored
QOS tests create congestion and verify the switch behavior. To create congestion, they need to have more traffic than the port can handle, so some of them force 1Gbps speed. The tests assume that 1Gbps speed is supported; otherwise, they will fail. Spectrum-4 ASIC will not support this speed in all ports, so to be able to run the tests there, some adjustments are required. Use shapers to limit the traffic instead of forcing speed. Note that for several ports, the speed configuration is just for autoneg issues, so a shaper is not needed there.

The tests already use an ETS qdisc as a root and RED qdiscs as children. Add a new TBF shaper to limit the rate of traffic, use it as the root qdisc, then place the previous hierarchy of qdiscs under the new TBF root.

In some ASICs, the shapers do not limit the traffic as accurately as forcing speed. To make the tests stable, allow the backlog size to be up to +-10% of the threshold. The aim of the tests is to make sure that with backlog << threshold there are no drops, and that packets are dropped somewhere in the vicinity of the configured threshold.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Amit Cohen authored
QOS tests create congestion and verify the switch behavior. To create congestion, they need to have more traffic than the port can handle, so some of them force 1Gbps speed. The tests assume that 1Gbps speed is supported; otherwise, they will fail. Spectrum-4 ASIC will not support this speed in all ports, so to be able to run QOS tests there, some adjustments are required. Use shapers to limit the traffic instead of forcing speed. Note that for several ports, the speed configuration is just for autoneg issues, so a shaper is not needed there.

In tests that already use shapers, set the existing shaper to be a child of a new TBF shaper, which is added as the root qdisc and acts as a port shaper.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Vladimir Oltean says:

====================
Remove label = "cpu" from DSA dt-bindings

As explained in more detail in patch 1/3, label = "cpu" is not part of DSA's device tree bindings, yet we have some checks in the dt-schema for mt7530 which are written as if it was. Reformulate those checks, and remove all occurrences of this seemingly used, but actually unused, property from the binding examples.
====================

Link: https://lore.kernel.org/r/20220912175058.280386-1-vladimir.oltean@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
This is not used by the DSA dt-binding, so remove it from all examples. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Acked-by: Arınç ÜNAL <arinc.unal@arinc9.com> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
The common dsa-port.yaml does this (and more) since commit 2ec2fb83 ("dt-bindings: net: dsa: make phylink bindings required for CPU/DSA ports"). Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Arınç ÜNAL <arinc.unal@arinc9.com> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
The fact that some DSA device trees use 'label = "cpu"' for the CPU port is nothing but blind cargo cult copying. The 'label' property was never part of the DSA DT bindings for anything except the user ports, where it provided a hint as to what name the created netdevs should use. DSA does use the "cpu" port label to identify a CPU port in dsa_port_parse(), but this is only for non-OF code paths (platform data). The proper way to identify a CPU port is to look at whether the 'ethernet' phandle is present. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Arınç ÜNAL <arinc.unal@arinc9.com> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
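A sketch of the OF check being referred to, modelled on what dsa_port_parse_of() does (simplified; the function name here is illustrative):

#include <linux/of.h>

/* illustrative helper name; the real check lives in dsa_port_parse_of() */
static bool port_is_cpu_port(struct device_node *port_dn)
{
	struct device_node *ethernet;

	/* A CPU port is recognized by its 'ethernet' phandle, never by a
	 * 'label' property.
	 */
	ethernet = of_parse_phandle(port_dn, "ethernet", 0);
	of_node_put(ethernet);

	return ethernet != NULL;
}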
-
Xiu Jianfeng authored
Add missing __init/__exit annotations to module init/exit funcs.

Signed-off-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Link: https://lore.kernel.org/r/20220909091840.247946-1-xiujianfeng@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
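For reference, this is the generic shape of the annotations being added (illustrative module, not the patched driver):

#include <linux/module.h>

static int __init foo_module_init(void)
{
	return 0;	/* placed in .init.text, discarded after load/boot */
}

static void __exit foo_module_exit(void)
{
	/* placed in .exit.text, dropped entirely when built-in */
}

module_init(foo_module_init);
module_exit(foo_module_exit);
MODULE_LICENSE("GPL");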
-
Gaosheng Cui authored
The definition of rxrpc_max_call_lifetime was removed by commit a158bdd3 ("rxrpc: Fix call timeouts"), so remove the declaration that was left behind.

Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com>
Link: https://lore.kernel.org/r/20220909064042.1149404-1-cuigaosheng1@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Yang Yingliang authored
Use the kmemdup() helper instead of open-coding to simplify the code when allocating dev_addr.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Acked-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20220914140100.3795545-2-yangyingliang@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
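The before/after shape of this kind of cleanup (illustrative helpers, not the exact upstream hunk):

#include <linux/etherdevice.h>
#include <linux/slab.h>
#include <linux/string.h>

/* open-coded allocate-and-copy */
static u8 *copy_dev_addr_open_coded(const u8 *addr)
{
	u8 *dev_addr = kmalloc(ETH_ALEN, GFP_KERNEL);

	if (dev_addr)
		memcpy(dev_addr, addr, ETH_ALEN);
	return dev_addr;
}

/* same thing expressed with kmemdup() */
static u8 *copy_dev_addr_kmemdup(const u8 *addr)
{
	return kmemdup(addr, ETH_ALEN, GFP_KERNEL);
}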
-