- 18 Mar, 2022 29 commits
-
-
Maxim Mikityanskiy authored
xmit_xdp_frame is extended to support sending fragmented XDP frames. The next commit will start using this functionality. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Maxim Mikityanskiy authored
When MPWQE is disabled, mlx5e_open_xdpsq() prefills the common fields of WQEs in the XDP SQ to save time when sending packets. mlx5e_xmit_xdp_frame() relies on the prefilled fields; however, sending multi buffer XDP frames would require changing some of these fields on a per-packet basis. Besides that, mlx5e_xmit_xdp_frame() will be used as a fallback to send multi buffer XDP frames when MPWQE is enabled (MPWQE can only handle linear packets). In order to prepare for XDP multi buffer support, this commit introduces a mode for mlx5e_xmit_xdp_frame() that fills all the fields itself. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Maxim Mikityanskiy authored
When MPWQE is disabled, mlx5e_open_xdpsq prefills the common fields of WQEs in the XDP SQ to save time when sending packets. One such field is eseg->inline_hdr.sz, which can be either 0 or MLX5E_XDP_MIN_INLINE, depending on the inline mode of the SQ. The inline mode can't change during the lifetime of the SQ, so setting this field again in mlx5e_xmit_xdp_frame is redundant. Moreover, the xmit function only sets it to MLX5E_XDP_MIN_INLINE, but never to 0 in the other case. This commit removes the redundant assignment in mlx5e_xmit_xdp_frame. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Maxim Mikityanskiy authored
The implementations of xmit_xdp_frame get the xdpi parameter of type struct mlx5e_xdp_info for the sole purpose of calling mlx5e_xdpi_fifo_push() on success. This commit moves this call outside of xmit_xdp_frame, shifting this responsibility to the caller. It will allow more fine-grained handling of XDP info for cases when an xdp_frame is fragmented. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Maxim Mikityanskiy authored
Use page_pool_set_dma_addr() to store the DMA address of a page inside struct page, in order to avoid passing struct mlx5e_dma_info to XDP handlers. Previously, struct mlx5e_dma_info was used to pass both the DMA address and the page, and it worked well for the single-fragment case. When XDP multi buffer is in use, and a fragmented xdp_frame has to be transmitted, the driver needs to know the DMA addresses of the fragments; however, the array of fragments in struct skb_shared_info doesn't contain them. In order to pass the DMA addresses, the driver puts them into struct page itself, which is accessible from the array of fragments in struct skb_shared_info. The existing XDP handlers are modified to remove the dependency on struct mlx5e_dma_info. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
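A minimal sketch of the idea: page_pool_set_dma_addr()/page_pool_get_dma_addr() are the real page_pool helpers, while the surrounding functions are illustrative stand-ins for the driver's code (error handling elided):

    /* Stash the DMA address in struct page at mapping time. */
    static void stash_dma_addr(struct device *dev, struct page *page)
    {
            dma_addr_t addr = dma_map_page(dev, page, 0, PAGE_SIZE,
                                           DMA_BIDIRECTIONAL);

            page_pool_set_dma_addr(page, addr);
    }

    /* Recover it later from a shinfo fragment, with no
     * struct mlx5e_dma_info in sight. */
    static dma_addr_t frag_dma_addr(const skb_frag_t *frag)
    {
            return page_pool_get_dma_addr(skb_frag_page(frag)) +
                   skb_frag_off(frag);
    }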
-
Maxim Mikityanskiy authored
This commit adds XDP multi buffer support to the RX path in the non-linear legacy RQ mode. mlx5e_xdp_handle is called from mlx5e_skb_from_cqe_nonlinear. The XDP_TX action for fragmented XDP frames is not yet supported and is blocked. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Maxim Mikityanskiy authored
The implementation of XDP in mlx5e assumes that the frame size is equal to the page size. Force this limitation in the non-linear mode for XDP multi buffer. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Maxim Mikityanskiy authored
XDP multi buffer implementation in the kernel assumes that all fragments have the same size. bpf_xdp_frags_increase_tail uses this assumption to get the size of the last fragment, and __xdp_build_skb_from_frame uses it to calculate truesize as nr_frags * xdpf->frame_sz. The current implementation of mlx5e uses fragments of different size in non-linear legacy RQ. Specifically, the last fragment can be larger than the others. It's an optimization for packets smaller than MTU. This commit adapts mlx5e to the kernel limitations and makes it use fragments of the same size, in order to add support for XDP multi buffer. The change is applied only if XDP is active, otherwise the old optimization still applies. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Maxim Mikityanskiy authored
mlx5e_skb_from_cqe_nonlinear creates an xdp_buff first, putting the first fragment in as the linear part and the rest of the fragments into struct skb_shared_info in the tailroom. Then it creates an SKB in place, based on the xdp_buff. The XDP program is not yet called in this commit. This commit contains no functional change, except that the SKB is built over the whole frag_stride of the first fragment, instead of the minimal size required (headroom, data and skb_shared_info). Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Krasnov Arseniy Vladimirovich authored
Add a test where the sender sends two messages, each with its own data pattern. The receiver first reads into a broken buffer: it spans three pages, but the middle page is unmapped. Then the receiver reads the second message into a valid buffer. The test checks that the uncopied part of the first message is dropped, and thus not copied as part of the second message. Signed-off-by: Krasnov Arseniy Vladimirovich <AVKrasnov@sberdevices.ru> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
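For illustration, the "broken buffer" can be built in user space like this (a standalone sketch of the trick, not the actual test code):

    /* Map three pages, then unmap the middle one, so any copy into
     * the buffer faults partway through. */
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            long page = sysconf(_SC_PAGESIZE);
            char *buf = mmap(NULL, 3 * page, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (buf == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }

            /* Punch a hole: accesses to buf[page..2*page) now fault. */
            if (munmap(buf + page, page)) {
                    perror("munmap");
                    return 1;
            }

            /* A read(fd, buf, 3 * page) would now stop with -EFAULT
             * once the kernel hits the unmapped middle page. */
            printf("buffer at %p, middle page unmapped\n", buf);
            return 0;
    }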
-
Krasnov Arseniy Vladimirovich authored
Add a test for the receive timeout check: a connection is established, the receiver sets a timeout, but the sender sends nothing. The receiver's read() call must then fail with EAGAIN. Signed-off-by: Krasnov Arseniy Vladimirovich <AVKrasnov@sberdevices.ru> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
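A minimal user-space sketch of the check, assuming an already-connected socket fd whose peer stays silent (a hypothetical helper, not the test's code):

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>

    static int check_rcv_timeout(int fd)
    {
            struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };
            char byte;

            if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)))
                    return -1;

            /* The peer sends nothing, so this must fail with EAGAIN
             * (or EWOULDBLOCK) after about a second instead of
             * blocking forever. */
            if (read(fd, &byte, 1) < 0 &&
                (errno == EAGAIN || errno == EWOULDBLOCK))
                    return 0;       /* expected timeout */

            return -1;
    }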
-
David S. Miller authored
Merge tag 'wireless-next-2022-03-18' of git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next Kalle Valo says: ==================== wireless-next patches for v5.18 Third set of patches for v5.18. A smaller set this time: support for mt7921u and some work on MBSSID support. Also a workaround for the rfkill userspace event.

Major changes:

mac80211
 * MBSSID beacon handling in AP mode

rfkill
 * make the new event layout opt-in, to work around buggy user space

rtlwifi
 * support On Networks N150 device id

mt76
 * mt7915: MBSSID and 6 GHz band support
 * new driver mt7921u
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Raju Lakkaraju says: ==================== net: lan743x: PCI11010 / PCI11414 devices This patch series continues with the addition of supported features for the Ethernet function of the PCI11010 / PCI11414 devices to the LAN743x driver. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raju Lakkaraju authored
Add support for PTP-IO Event Output (Periodic Output, a.k.a. perout) for the PCI11010/PCI11414 chips. Signed-off-by: Raju Lakkaraju <Raju.Lakkaraju@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
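From user space, such a periodic output is requested through the generic PTP character device API this feature plugs into; a sketch (the device path and channel number are placeholders):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/ptp_clock.h>

    int main(void)
    {
            struct ptp_perout_request req;
            int fd = open("/dev/ptp0", O_RDWR);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            memset(&req, 0, sizeof(req));
            req.index = 0;          /* output channel / PTP-IO pin */
            req.period.sec = 1;     /* 1 Hz output */
            req.period.nsec = 0;    /* start left at 0 = as soon as
                                     * possible; some clocks want a
                                     * future start time instead */

            if (ioctl(fd, PTP_PEROUT_REQUEST, &req)) {
                    perror("PTP_PEROUT_REQUEST");
                    return 1;
            }
            return 0;
    }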
-
Raju Lakkaraju authored
The PTP-IO block provides time stamping of PTP-IO input events. PTP-IOs are numbered from 0 to 11. When a PTP-IO is enabled by the corresponding bit in the PTP-IO Capture Configuration Register, a rising or falling edge, respectively, captures the 1588 Local Time Counter. Signed-off-by: Raju Lakkaraju <Raju.Lakkaraju@microchip.com> Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
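A hedged user-space sketch of arming such a capture and reading the resulting timestamps via the standard PTP chardev interface (device path and pin number are placeholders):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/ptp_clock.h>

    int main(void)
    {
            struct ptp_extts_request req;
            struct ptp_extts_event ev;
            int fd = open("/dev/ptp0", O_RDWR);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            memset(&req, 0, sizeof(req));
            req.index = 0;  /* PTP-IO 0 */
            req.flags = PTP_ENABLE_FEATURE | PTP_RISING_EDGE;

            if (ioctl(fd, PTP_EXTTS_REQUEST, &req)) {
                    perror("PTP_EXTTS_REQUEST");
                    return 1;
            }

            /* Each captured edge arrives as one event on the chardev. */
            if (read(fd, &ev, sizeof(ev)) == sizeof(ev))
                    printf("pin %u: %lld.%09u\n", ev.index,
                           (long long)ev.t.sec, ev.t.nsec);
            return 0;
    }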
-
Raju Lakkaraju authored
Add new OTP read and write access functions for the PCI11010/PCI11414 chips. The PCI11010/PCI11414 OTP module register offsets differ from those of the LAN743x OTP module. Signed-off-by: Raju Lakkaraju <Raju.Lakkaraju@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raju Lakkaraju authored
Add new EEPROM read and write access functions for the PCI11010/PCI11414 chips, with system lock protection for access shared between devices. Signed-off-by: Raju Lakkaraju <Raju.Lakkaraju@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raju Lakkaraju authored
Display statistics for the four Tx queues through the ethtool application. Signed-off-by: Raju Lakkaraju <Raju.Lakkaraju@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Johannes Berg authored
Again new complaints surfaced that we had broken the ABI here, although previously all the userspace tools had agreed that it was their mistake and fixed it. Yet now there are cases (e.g. RHEL) that want to run old userspace with newer kernels, and thus are broken. Since this is a bit of a whack-a-mole thing, change the whole extensibility scheme of rfkill to no longer just rely on the message lengths, but instead require userspace to opt in via a new ioctl to a given maximum event size that it is willing to understand. By default, set that to RFKILL_EVENT_SIZE_V1 (8), so that the behaviour for userspace not calling the ioctl will look as if it's just running on an older kernel. Fixes: 14486c82 ("rfkill: add a reason to the HW rfkill state") Cc: stable@vger.kernel.org # 5.11+ Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Kalle Valo <kvalo@kernel.org> Link: https://lore.kernel.org/r/20220316212749.16491491b270.Ifcb1950998330a596f29a2a162e00b7546a1d6d0@changeid
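A hedged sketch of the new opt-in from user space; the macro name (RFKILL_IOCTL_MAX_SIZE) and the pointer-to-__u32 calling convention are assumptions based on this patch, and struct rfkill_event_ext is the extended layout from the earlier change:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/rfkill.h>

    int main(void)
    {
            __u32 max_size = sizeof(struct rfkill_event_ext);
            int fd = open("/dev/rfkill", O_RDWR);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            /* Without this opt-in, reads behave as on an old kernel
             * and return only RFKILL_EVENT_SIZE_V1 (8) bytes per
             * event. */
            if (ioctl(fd, RFKILL_IOCTL_MAX_SIZE, &max_size))
                    perror("RFKILL_IOCTL_MAX_SIZE");
            return 0;
    }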
-
David S. Miller authored
Merge git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux Saeed Mahameed says: ==================== mlx5-updates-2022-03-17

1) From Maxim Mikityanskiy, datapath improvements in preparation for XDP multi buffer. This series contains general improvements for the datapath that are useful for the upcoming XDP multi buffer support:
   a. Non-linear legacy RQ: validate the MTU for robustness, build the linear part of the SKB over the first hardware fragment (instead of copying the packet headers), and adjust the headroom calculations to allow enabling headroom in the non-linear mode (useful for XDP multi buffer).
   b. XDP: do the XDP program test before the function call, and optimize the parameters of mlx5e_xdp_handle.

2) From Rongwei Liu, DR: reduce steering memory usage. Currently, the mlx5 driver uses mlx5_htbl/chunk/ste to organize the steering logic, which wastes a little memory. This update reduces the steering memory footprint by:
   a. Adjusting the struct member layout.
   b. Removing a duplicated indicator by using simple function calls.
   With 500k TX rules (3 STEs) plus 500k RX rules (6 STEs), these patches save around 17% of memory.

3) Three cleanup commits at the end of this series.
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Vladimir Oltean says: ==================== Mirroring for Ocelot switches This series adds support for tc-matchall (port-based) and tc-flower (flow-based) offloading of the tc-mirred action. Support has been added for both the ocelot switchdev driver and the felix DSA driver. ==================== Link: https://lore.kernel.org/r/20220316204144.2679277-1-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
Gain support for port mirroring using tc-matchall by forwarding the calls to the ocelot switch library. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
Drivers might have error messages to propagate to user space, most common being that they support a single mirror port. Propagate the netlink extack so that they can inform user space in a verbal way of their limitations. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
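A sketch of what the extra argument enables in a driver's mirror callback; the op shape follows DSA's .port_mirror_add() after this change, while foo_priv, the -1 sentinel and the message text are illustrative:

    #include <net/dsa.h>

    struct foo_priv {
            int mirror_port;        /* -1 when no mirror configured */
    };

    static int foo_port_mirror_add(struct dsa_switch *ds, int port,
                                   struct dsa_mall_mirror_tc_entry *mirror,
                                   bool ingress,
                                   struct netlink_ext_ack *extack)
    {
            struct foo_priv *priv = ds->priv;

            if (priv->mirror_port != -1 &&
                priv->mirror_port != mirror->to_local_port) {
                    /* The verbal feedback user space now gets to see: */
                    NL_SET_ERR_MSG_MOD(extack,
                                       "Sink port already in use, delete other mirroring rules first");
                    return -EBUSY;
            }

            priv->mirror_port = mirror->to_local_port;
            return 0;
    }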
-
Vladimir Oltean authored
Per-flow mirroring with the VCAP IS2 TCAM (in itself handled as an offload for tc-flower) is done by setting the MIRROR_ENA bit from the action vector of the filter. The packet is mirrored to the port mask configured in the ANA:ANA:MIRRORPORTS register (the same port mask as the destinations for port-based mirroring). Functionality was tested with:

    tc qdisc add dev swp3 clsact
    tc filter add dev swp3 ingress protocol ip \
        flower skip_sw ip_proto icmp \
        action mirred egress mirror dev swp1

and pinging through swp3, while seeing that the ICMP replies are mirrored towards swp1. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
Some VCAP filters utilize resources which are global to the switch, like for example VCAP IS2 policers take an index into a global policer pool. In commit c9a7fe12 ("net: mscc: ocelot: add action of police on vcap_is2"), Xiaoliang expressed this by hooking into the low-level ocelot_vcap_filter_add_to_block() and ocelot_vcap_block_remove_filter() functions, and allocating/freeing the policers from there. Evaluating the code, there probably isn't a better place, but we'll need to do something similar for the mirror ports, and the code will start to look even more hacked up than it is right now. Create two ocelot_vcap_filter_{add,del}_aux_resources() functions to contain the madness, and pollute less the body of other functions such as ocelot_vcap_filter_add_to_block(). Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
Ocelot switches perform port-based ingress mirroring if ANA:PORT:PORT_CFG field SRC_MIRROR_ENA is set, and egress mirroring if the port is in ANA:ANA:EMIRRORPORTS. Both ingress-mirrored and egress-mirrored frames are copied to the port mask from ANA:ANA:MIRRORPORTS.

So the choice of limiting to a single mirror port via ocelot_mirror_get() and ocelot_mirror_put() may seem bizarre, but the hardware model doesn't map very well to the user space model. If the user wants to mirror the ingress of swp1 towards swp2 and the ingress of swp3 towards swp4, we'd have to program ANA:ANA:MIRRORPORTS with BIT(2) | BIT(4), and that would make swp1 be mirrored towards swp4 too, and swp3 towards swp2. But there are no tc-matchall rules to describe those actions.

Now, we could offload a matchall rule with multiple mirred actions, one per desired mirror port, and force the user to stick to the multi-action rule format for subsequent matchall filters. But both DSA and ocelot have the flow_offload_has_one_action() check for the matchall offload, plus the fact that it will get cumbersome to cross-check matchall mirrors with flower mirrors (which will be added in the next patch). As a result, we limit the configuration to a single mirror port, with the possibility of lifting the restriction in the future.

Frames injected from the CPU don't get egress-mirrored, since they are sent with the BYPASS bit in the injection frame header, and this bypasses the analyzer module (effectively also the mirroring logic). I don't know what to do/say about this.

Functionality was tested with:

    tc qdisc add dev swp3 clsact
    tc filter add dev swp3 ingress \
        matchall skip_sw \
        action mirred egress mirror dev swp1

and pinging through swp3, while seeing that the ICMP replies are mirrored towards swp1. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
In preparation for adding port mirroring support to the ocelot driver, the dispatching function ocelot_setup_tc_cls_matchall() must be free of action-specific code. Move port policer creation and deletion to separate functions. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jonathan Lemon authored
An earlier patch mistakenly changed these variables from u32 to u16, leading to unintended truncation. Restore the original logic. Fixes: a509a7c6 ("ptp: ocp: Add support for selectable SMA directions.") Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Jonathan Lemon <jonathan.lemon@gmail.com> Link: https://lore.kernel.org/r/20220316165347.599154-1-jonathan.lemon@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Oleksij Rempel authored
According to the erratum described in DS80000687C[1], "Module 2: Link drops with some EEE link partners.", we need to "Disable the EEE next page exchange in EEE Global Register 2". 1 - https://ww1.microchip.com/downloads/en/DeviceDoc/KSZ87xx-Errata-DS80000687C.pdf Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Link: https://lore.kernel.org/r/20220316125529.1489045-1-o.rempel@pengutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 17 Mar, 2022 11 commits
-
-
Jakub Kicinski authored
Tobias Waldekranz says: ==================== net: bridge: Multiple Spanning Trees

The bridge has had per-VLAN STP support for a while now, since: https://lore.kernel.org/netdev/20200124114022.10883-1-nikolay@cumulusnetworks.com/

The current implementation has some problems:

- The mapping from VLAN to STP state is fixed as 1:1, i.e. each VLAN is managed independently. This is awkward from an MSTP (802.1Q-2018, Clause 13.5) point of view, where the model is that multiple VLANs are grouped into MST instances. Because of the way that the standard is written, presumably, this is also reflected in hardware implementations. It is not uncommon for a switch to support the full 4k range of VIDs, but that the pool of MST instances is much smaller. Some examples:

    Marvell LinkStreet (mv88e6xxx): 4k VLANs, but only 64 MSTIs
    Marvell Prestera: 4k VLANs, but only 128 MSTIs
    Microchip SparX-5i: 4k VLANs, but only 128 MSTIs

- By default, the feature is enabled, and there is no way to disable it. This makes it hard to add offloading in a backwards compatible way, since any underlying switchdevs have no way to refuse the function if the hardware does not support it.

- The port-global STP state has precedence over per-VLAN states. In MSTP, as far as I understand it, all VLANs will use the common spanning tree (CST) by default - through traffic engineering you can then optimize your network to group subsets of VLANs to use different trees (MSTI). To my understanding, the way this is typically managed in silicon is roughly:

       Incoming packet:
       .----.----.--------------.----.-------------
       | DA | SA | 802.1Q VID=X | ET | Payload ...
       '----'----'--------------'----'-------------
                         |
                         '->|\        .----------------------------.
                            | +-----> | VID | Members | ... | MSTI |
                PVID ------>|/        |-----|---------|-----|------|
                                      |   1 | 0001001 | ... |    0 |
                                      |   2 | 0001010 | ... |   10 |
                                      |   3 | 0001100 | ... |   10 |
                                      '----------------------------'
                                                             |
                               .-----------------------------'
                               |
                               |  .------------------------.
                               '->| MSTI | Fwding | Lrning |
                                  |------|--------|--------|
                                  |    0 | 111110 | 111110 |
                                  |   10 | 110111 | 110111 |
                                  '------------------------'

What this is trying to show is that the STP state (whether MSTP is used, or ye olde STP) is always accessed via the VLAN table. If STP is running, all MSTI pointers in that table will reference the same index in the STP table - if MSTP is running, some VLANs may point to other trees (like in this example).

The fact that in the Linux bridge, the global state (think: index 0 in most hardware implementations) is supposed to override the per-VLAN state, is very awkward to offload. In effect, this means that when the global state changes to blocking, drivers will have to iterate over all MSTIs in use, and alter them all to match. This also means that you have to cache whether the hardware state is currently tracking the global state or the per-VLAN state. In the first case, you also have to cache the per-VLAN state so that you can restore it if the global state transitions back to forwarding.

This series adds a new mst_enable bridge setting (as suggested by Nik) that can only be changed when no VLANs are configured on the bridge. Enabling this mode has the following effect:

- The port-global STP state is used to represent the CST (Common Spanning Tree) (1/15)
- Ingress STP filtering is deferred until the frame's VLAN has been resolved (1/15)
- The preexisting per-VLAN states can no longer be controlled directly (1/15). They are instead placed under the MST module's control, which is managed using a new netlink interface (described in 3/15)
- VLANs can be mapped to MSTIs in an arbitrary M:N fashion, using a new global VLAN option (2/15)

Switchdev notifications are added so that a driver can track:
- MST enabled state
- VID to MSTI mappings
- MST port states

An offloading implementation is then provided for mv88e6xxx. ==================== Link: https://lore.kernel.org/r/20220316150857.2442916-1-tobias@waldekranz.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tobias Waldekranz authored
Allocate a SID in the STU for each MSTID in use by a bridge and handle the mapping of MSTIDs to VLANs using the SID field of each VTU entry. Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tobias Waldekranz authored
Export the raw STU data in a devlink region so that it can be inspected from userspace and compared to the current bridge configuration. Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tobias Waldekranz authored
In early LinkStreet silicon (e.g. 6095/6185), the per-VLAN STP states were kept in the VTU - there was no concept of a SID. Later, the information was split into two tables, where the VTU only tracked memberships and deferred the STP state tracking to the STU via a pointer (SID). This meant that a group of VLANs could share the same STU entry. Most likely, this was done to align with MSTP (802.1Q-2018, Clause 13), which is built on this principle. While the VTU is still 4k lines on most devices, the STU is capped at 64 entries. This means that the current strategy, updating STU info whenever a VTU entry is updated, cannot easily support MSTP because:

- The maximum number of VIDs would also be capped at 64, as we would have to allocate one SID for every VTU entry - even if many VLANs would effectively share the same MST.
- MSTP updates would be unnecessarily slow, as you would have to iterate over all VLANs that share the same MST.

In order to support MSTP offloading in the future, manage the STU as a separate entity from the VTU. Only add support for newer hardware with separate VTU and STU. VTU-only devices can also be supported, but essentially this requires a software implementation of an STU (fanning out state changes to all VLANs tied to the same MST). Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tobias Waldekranz authored
Add the usual trampoline functionality from the generic DSA layer down to the drivers for MST state changes. When a state changes to disabled/blocking/listening, make sure to fast age any dynamic entries in the affected VLANs (those controlled by the MSTI in question). Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tobias Waldekranz authored
Add the usual trampoline functionality from the generic DSA layer down to the drivers for VLAN MSTI migrations. Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tobias Waldekranz authored
When joining a bridge where MST is enabled, we validate that the proper offloading support is in place, otherwise we fall back to software bridging. When the mode is changed on a bridge in which we are members, we refuse the change if offloading is not supported. At the moment we only check for configurable learning, but this will be further restricted as we support more MST related switchdev events. Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tobias Waldekranz authored
This is useful for switchdev drivers who are offloading MST states into hardware. As an example, a driver may wish to flush the FDB for a port when it transitions from forwarding to blocking - which means that the previous state must be discoverable. Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
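A hypothetical driver snippet showing the intended use; the helper's signature is assumed to be int br_mst_get_state(const struct net_device *dev, u16 msti, u8 *state), and foo_fast_age_msti() is made up for illustration:

    #include <linux/if_bridge.h>

    static void foo_mst_state_change(struct net_device *brport_dev,
                                     u16 msti, u8 new_state)
    {
            u8 old_state;

            /* Query the state as it was before the change is applied. */
            if (br_mst_get_state(brport_dev, msti, &old_state))
                    return;

            /* Flush dynamic FDB entries only on a forwarding ->
             * blocking transition. */
            if (old_state == BR_STATE_FORWARDING &&
                new_state == BR_STATE_BLOCKING)
                    foo_fast_age_msti(brport_dev, msti);
    }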
-
Tobias Waldekranz authored
This is useful for switchdev drivers that might want to refuse to join a bridge where MST is enabled, if the hardware can't support it. Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
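A sketch of refusing the offload at join time; br_mst_enabled() is the helper this commit adds, while foo_priv, has_stu and the message text are illustrative:

    #include <linux/if_bridge.h>

    static int foo_port_bridge_join(struct foo_priv *priv,
                                    struct net_device *br,
                                    struct netlink_ext_ack *extack)
    {
            if (br_mst_enabled(br) && !priv->has_stu) {
                    NL_SET_ERR_MSG_MOD(extack,
                                       "Hardware cannot offload MST");
                    /* Caller falls back to software bridging. */
                    return -EOPNOTSUPP;
            }
            return 0;
    }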
-
Tobias Waldekranz authored
br_mst_get_info answers the question: "On this bridge, which VIDs are mapped to the given MSTI?" This is useful in switchdev drivers, which might have to fan out operations relating to an MSTI per VLAN. An example: when a port's MST state changes from forwarding to blocking, a driver may choose to flush the dynamic FDB entries on that port to get faster reconvergence of the network, but this should only be done in the VLANs that are managed by the MSTI in question. Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
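A hedged sketch of such a fan-out, assuming the helper fills a bitmap of VIDs (int br_mst_get_info(const struct net_device *dev, u16 msti, unsigned long *vids)); foo_fdb_flush_vid() is made up for illustration:

    #include <linux/if_bridge.h>
    #include <linux/if_vlan.h>

    static void foo_flush_msti(struct net_device *br_dev, int port,
                               u16 msti)
    {
            DECLARE_BITMAP(vids, VLAN_N_VID) = { 0 };
            int vid;

            if (br_mst_get_info(br_dev, msti, vids))
                    return;

            /* Flush only the VLANs managed by this MSTI. */
            for_each_set_bit(vid, vids, VLAN_N_VID)
                    foo_fdb_flush_vid(br_dev, port, vid);
    }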
-
Tobias Waldekranz authored
Generate a switchdev notification whenever an MST state changes. This notification is keyed by the VLAN's MSTI rather than the VID, since multiple VLANs may share the same MST instance. Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
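In a switchdev driver, the new notification would be caught roughly like this; the attr id and payload field names follow this series, while foo_port_mst_state_set() and the surrounding dispatch are illustrative:

    /* Inside the driver's port_attr_set() dispatch: */
    switch (attr->id) {
    case SWITCHDEV_ATTR_ID_PORT_MST_STATE:
            /* Keyed by MSTI, so one event may cover many VLANs. */
            err = foo_port_mst_state_set(dev, attr->u.mst_state.msti,
                                         attr->u.mst_state.state);
            break;
    }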
-