- 05 Nov, 2020 12 commits
-
-
Sebastian Andrzej Siewior authored
mlx5_eq_async_int() uses in_irq() to decide whether eq::lock needs to be acquired and released with spin_[un]lock() or the irq saving/restoring variants. The usage of in_*() in drivers is being phased out, and Linus clearly requested that code which changes behaviour depending on context should either be separated or have the context conveyed in an argument passed by the caller, which usually knows the context. mlx5_eq_async_int() already knows the context via the action argument, so using it for the lock-variant decision is a straightforward replacement for in_irq(). Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
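A minimal sketch of the pattern described above (illustrative names, not the actual mlx5 code): the caller conveys whether it runs in hard-IRQ context, so the callee no longer guesses with in_irq().

    #include <linux/spinlock.h>

    struct eq_ctx {
            spinlock_t lock;
    };

    /* 'from_hardirq' is supplied by the caller, which knows its own context. */
    static void eq_handle_events(struct eq_ctx *eq, bool from_hardirq)
    {
            unsigned long flags = 0;

            if (from_hardirq)
                    spin_lock(&eq->lock);           /* IRQs already disabled */
            else
                    spin_lock_irqsave(&eq->lock, flags);

            /* ... process event queue entries ... */

            if (from_hardirq)
                    spin_unlock(&eq->lock);
            else
                    spin_unlock_irqrestore(&eq->lock, flags);
    }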
-
Saeed Mahameed authored
$ git ls-files *.[ch] | egrep drivers/net/ethernet/mellanox/ | \ xargs scripts/kernel-doc -none drivers/net/ethernet/mellanox/mlx5/core/fpga/sdk.h:57: warning: Enum value 'MLX5_FPGA_ACCESS_TYPE_I2C' not described ... drivers/net/ethernet/mellanox/mlx5/core/fpga/sdk.h:57: warning: Enum value 'MLX5_FPGA_ACCESS_TYPE_DONTCARE' not described ... drivers/net/ethernet/mellanox/mlx5/core/fpga/sdk.h:118: warning: Function parameter or member 'cb_arg' not described ... drivers/net/ethernet/mellanox/mlx5/core/fpga/sdk.h:160: warning: Function parameter or member 'conn' not described ... drivers/net/ethernet/mellanox/mlx5/core/fpga/sdk.h:160: warning: Excess function parameter 'fdev' description ... Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Reported-by: Moshe Shemesh <moshe@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
-
Saeed Mahameed authored
$ git ls-files *.[ch] | egrep drivers/net/ethernet/mellanox/ | \ xargs scripts/kernel-doc -none drivers/net/ethernet/mellanox/mlx4/fw_qos.h:144: warning: Function parameter or member 'in_param' not described ... drivers/net/ethernet/mellanox/mlx4/fw_qos.h:144: warning: Excess function parameter 'out_param' description ... Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Reported-by: Moshe Shemesh <moshe@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
-
Vladyslav Tarasiuk authored
Stop room is space that may be taken by WQEs in the SQ during a packet transmit. It is used to check whether the next packet has enough room in the SQ: stop room guarantees the packet can be served and, if not, the queue is stopped so no more packets are passed to the driver until it is ready. Currently, stop_room size is calculated and validated upon TX queue allocation. This makes it impossible to know whether the user provided valid input for certain parameters while the interface is down. Instead, store stop_room in mlx5e_sq_param and create mlx5e_validate_params() to validate its fields upon user input, even when the interface is down. Signed-off-by: Vladyslav Tarasiuk <vladyslavt@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
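A hedged sketch of the idea, with illustrative names (not the exact mlx5e helpers): once stop_room lives in the SQ parameters, validating user input reduces to a bounds check that works even while the interface is down.

    #include <linux/errno.h>

    /* Reject parameters whose stop_room would consume the entire SQ:
     * there must be room left to post at least one packet. */
    static int validate_sq_stop_room(unsigned int stop_room,
                                     unsigned int sq_size_in_wqebbs)
    {
            if (stop_room >= sq_size_in_wqebbs)
                    return -EINVAL;
            return 0;
    }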
-
Yevgeny Kliteynik authored
Track the buddy's used ICM memory, and free it once all of the buddy's memory has become unused. Do this only for STEs. MODIFY_ACTION buddies are much smaller, so when there is a large number of modify_header actions, which results in a large number of MODIFY_ACTION buddies, doing this cleanup during sync would hurt performance while not freeing a significant amount of memory. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Track the pool's hot ICM memory when freeing/allocating a chunk, so that checking whether a sync is required is just a matter of comparing the pool's hot memory against the sync threshold. Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
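A sketch of the bookkeeping this describes, using illustrative names and an assumed threshold (the real mlx5dr structures and constant differ): the free path adds the chunk's size to a running hot-memory counter, so the sync decision becomes a single comparison.

    #include <linux/list.h>

    struct icm_pool {
            struct list_head hot_list;
            size_t hot_memory_size;
    };

    #define POOL_HOT_MEM_SYNC_THRESHOLD (64 * 1024 * 1024)

    static void pool_sync(struct icm_pool *pool); /* flushes hot memory to ICM */

    static void pool_put_hot_chunk(struct icm_pool *pool,
                                   struct list_head *chunk_node, size_t chunk_size)
    {
            list_add_tail(chunk_node, &pool->hot_list);
            pool->hot_memory_size += chunk_size;

            if (pool->hot_memory_size > POOL_HOT_MEM_SYNC_THRESHOLD)
                    pool_sync(pool);
    }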
-
Yevgeny Kliteynik authored
When freeing chunks, we want to sync the steering so that all the "hot" memory is written to ICM and all the chunks on the hot_list are actually destroyed. When allocating from the pool, there is no need to sync the steering, as nothing is being freed, and a sync would only hurt performance in terms of flows-per-second offloaded. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Until now, the ICM memory was managed with a bucket mechanism that kept a bucket per specified size (sizes ranged from 1 block to 2^21 blocks). Replace that with a buddy-system mechanism, which gives us a much more flexible way to manage the ICM memory. Its biggest advantage over the buckets is that it uses the same ICM memory area for all block sizes, which reduces memory consumption. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Add an implementation of the SW Steering variation of a buddy allocator. The buddy system for ICM memory uses two main data structures: a bitmap per order that keeps the current state of allocated blocks for that order, and a counter of the number of available blocks per order (a generic sketch of the scheme follows below). Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
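A generic buddy-allocator sketch built on exactly those two structures, a free-block bitmap per order plus a per-order free count; it illustrates the technique, not the mlx5dr code, and assumes the bitmaps are allocated and initialized elsewhere (one set bit at the top order for a fresh pool).

    #include <limits.h>

    #define MAX_ORDER      24
    #define BITS_PER_LONG  (sizeof(unsigned long) * CHAR_BIT)

    struct buddy_pool {
            unsigned long *bits[MAX_ORDER + 1];   /* bit set => free block of 2^order units */
            unsigned int num_free[MAX_ORDER + 1]; /* number of free blocks per order */
            unsigned int max_order;
    };

    static void set_free(struct buddy_pool *p, unsigned int order, unsigned int seg)
    {
            p->bits[order][seg / BITS_PER_LONG] |= 1UL << (seg % BITS_PER_LONG);
            p->num_free[order]++;
    }

    static void clear_free(struct buddy_pool *p, unsigned int order, unsigned int seg)
    {
            p->bits[order][seg / BITS_PER_LONG] &= ~(1UL << (seg % BITS_PER_LONG));
            p->num_free[order]--;
    }

    static int find_free(const struct buddy_pool *p, unsigned int order)
    {
            unsigned int nbits = 1U << (p->max_order - order);
            unsigned int i;

            for (i = 0; i < nbits; i++)
                    if (p->bits[order][i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)))
                            return i;
            return -1;
    }

    /* Allocate a block of 2^order units: take the smallest free block that is
     * large enough and split it down, marking the buddies created on the way
     * as free. Returns the segment index at the requested order, or -1. */
    static int buddy_alloc(struct buddy_pool *p, unsigned int order)
    {
            unsigned int o;
            int seg;

            for (o = order; o <= p->max_order; o++)
                    if (p->num_free[o])
                            break;
            if (o > p->max_order)
                    return -1;

            seg = find_free(p, o);
            clear_free(p, o, seg);

            while (o > order) {
                    o--;
                    seg *= 2;
                    set_free(p, o, seg + 1);  /* second half becomes a free buddy */
            }
            return seg;
    }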
-
Yevgeny Kliteynik authored
Remove flex parser from the matcher function names since the matcher should not be aware of such HW specific details. Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
We will support multiple STE versions. The existing naming is not suitable for newer versions, so remove the HW-specific details and rename with more general names. Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Struct mlx5dr_action doesn't use this member. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
- 03 Nov, 2020 28 commits
-
-
Jakub Kicinski authored
Aleksandr Nogikh says: ==================== net, mac80211, kernel: enable KCOV remote coverage collection for 802.11 frame handling This patch series enables remote KCOV coverage collection during 802.11 frame processing. These changes make it possible to perform coverage-guided fuzzing in search of remotely triggerable bugs. Normally, KCOV collects coverage information for the code that is executed inside the system call context. It is easy to identify where that coverage should go and whether it should be collected at all by looking at the current process. If KCOV was enabled on that process, coverage will be stored in a buffer specific to that process. However, that is not always enough, as handling can happen elsewhere (e.g. in separate kernel threads). When it is impossible to infer KCOV-related info just by looking at the currently running process, one needs to manually pass some information to the code that should be instrumented. The information takes the form of 64 bit integers (KCOV remote handles). Zero is the special value that corresponds to an empty handle. More details on KCOV and remote coverage collection can be found in Documentation/dev-tools/kcov.rst. The series consists of three commits. 1. Apply a minor fix to kcov_common_handle() so that it returns a valid handle (zero) when called in an interrupt context. 2. Take the remote handle from KCOV and attach it to newly allocated SKBs as an skb extension. If the allocation happens inside a system call context, the SKB will be tied to the process that issued the syscall (if that process is interested in remote coverage collection). 3. Annotate the code that processes incoming 802.11 frames with kcov_remote_start()/kcov_remote_stop(). v5: * Collecting remote coverage at ieee80211_rx_list() instead of ieee80211_rx() v4: https://lkml.kernel.org/r/20201028182018.1780842-1-aleksandrnogikh@gmail.com * CONFIG_SKB_EXTENSIONS is now automatically selected by CONFIG_KCOV. * Elaborated on a minor optimization in skb_set_kcov_handle(). v3: https://lkml.kernel.org/r/20201026150851.528148-1-aleksandrnogikh@gmail.com * kcov_handle is now stored in skb extensions instead of sk_buff itself. * Updated the cover letter. v2: https://lkml.kernel.org/r/20201009170202.103512-1-a.nogikh@gmail.com * Moved KCOV annotations from ieee80211_tasklet_handler to ieee80211_rx. * Updated kcov_common_handle() to return 0 if it is called in interrupt context. * Updated the cover letter. v1: https://lkml.kernel.org/r/20201007101726.3149375-1-a.nogikh@gmail.com ==================== Link: https://lore.kernel.org/r/20201029173620.2121359-1-aleksandrnogikh@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Aleksandr Nogikh authored
Add KCOV remote annotations to ieee80211_iface_work() and ieee80211_rx_list(). This will enable coverage-guided fuzzing of mac80211 code that processes incoming 802.11 frames. Signed-off-by: Aleksandr Nogikh <nogikh@google.com> Reviewed-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
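The annotation itself is a small wrapper around the frame-processing path; a sketch of the pattern (the handle comes from the skb extension added earlier in this series, see Documentation/dev-tools/kcov.rst for the semantics):

    #include <linux/kcov.h>
    #include <linux/skbuff.h>

    static void handle_frame(struct sk_buff *skb)
    {
            /* Attribute coverage of this deferred processing back to the
             * task that originally allocated the skb. */
            kcov_remote_start_common(skb_get_kcov_handle(skb));
            /* ... 802.11 frame processing ... */
            kcov_remote_stop();
    }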
-
Aleksandr Nogikh authored
Remote KCOV coverage collection enables coverage-guided fuzzing of code that is not reachable during normal system call execution. It is especially helpful for fuzzing networking subsystems, where it is common to perform packet handling in separate work queues even for packets that originated directly from user space. Enable coverage-guided frame injection by adding a KCOV remote handle to skb extensions. Default initialization in __alloc_skb and __build_skb_around ensures that no socket buffer generated during a system call will be missed. Code that is of interest and performs packet processing should be annotated with kcov_remote_start()/kcov_remote_stop(). An alternative approach is to determine kcov_handle solely on the basis of the device/interface that received the specific socket buffer. However, in that case it would be impossible to distinguish between packets that originated from normal background network processes and packets that were intentionally injected from user space. Signed-off-by: Aleksandr Nogikh <nogikh@google.com> Acked-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
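A sketch of the storage side, using the helpers this patch introduces; in the actual patch the hook sits inside __alloc_skb()/__build_skb_around() rather than in a wrapper like this:

    #include <linux/kcov.h>
    #include <linux/skbuff.h>

    static struct sk_buff *alloc_skb_with_kcov(unsigned int len, gfp_t gfp)
    {
            struct sk_buff *skb = alloc_skb(len, gfp);

            /* Remember the allocating task's KCOV handle so that deferred
             * processing (softirqs, work queues) can report coverage to it. */
            if (skb)
                    skb_set_kcov_handle(skb, kcov_common_handle());
            return skb;
    }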
-
Aleksandr Nogikh authored
kcov_common_handle is a method that is used to obtain a "default" KCOV remote handle of the current process. The handle can later be passed to kcov_remote_start in order to collect coverage for the processing that is initiated by one process, but done in another. For details see Documentation/dev-tools/kcov.rst and comments in kernel/kcov.c. Presently, if kcov_common_handle is called in an IRQ context, it will return a handle for the interrupted process. This may lead to unreliable and incorrect coverage collection. Adjust the behavior of kcov_common_handle in the following way. If it is called in a task context, return the common handle for the currently running task. Otherwise, return 0. Signed-off-by: Aleksandr Nogikh <nogikh@google.com> Reviewed-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
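The adjusted behavior amounts to a small guard; a sketch consistent with the description above:

    static inline u64 kcov_common_handle(void)
    {
            if (!in_task())
                    return 0;       /* the empty handle outside task context */
            return current->kcov_handle;
    }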
-
Tom Rix authored
A semicolon is not needed after a switch statement. Signed-off-by: Tom Rix <trix@redhat.com> Link: https://lore.kernel.org/r/20201031153047.2147341-1-trix@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
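The cleanup is purely syntactic; a generic illustration with hypothetical identifiers:

    switch (ring->mode) {
    case RING_MODE_POLL:
            poll_ring(ring);
            break;
    default:
            break;
    }       /* no ';' here -- a switch is already a complete statement */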
-
Tom Rix authored
A semicolon is not needed after a switch statement. Signed-off-by: Tom Rix <trix@redhat.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20201101140528.2279424-1-trix@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tom Rix authored
A semicolon is not needed after a switch statement. Signed-off-by: Tom Rix <trix@redhat.com> Link: https://lore.kernel.org/r/20201101140720.2280013-1-trix@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tom Rix authored
A semicolon is not needed after a switch statement. Signed-off-by: Tom Rix <trix@redhat.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20201101153647.2292322-1-trix@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tom Rix authored
A semicolon is not needed after a switch statement. Signed-off-by: Tom Rix <trix@redhat.com> Link: https://lore.kernel.org/r/20201101155601.2294374-1-trix@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tom Rix authored
A semicolon is not needed after a switch statement. Signed-off-by: Tom Rix <trix@redhat.com> Link: https://lore.kernel.org/r/20201101155822.2294856-1-trix@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Vladimir Oltean says: ==================== Generic TX reallocation for DSA Christian has reported buggy usage of skb_put() in tag_ksz.c, which is only triggerable in real life using his not-yet-published patches for IEEE 1588 timestamping on Micrel KSZ switches. The concrete problem there is that the driver can end up calling skb_put() and exceed the end of the skb data area, because even though it had reallocated the frame once before, it hadn't reallocated it large enough. Christian explained it in more detail here: https://lore.kernel.org/netdev/20201014161719.30289-1-ceggers@arri.de/ https://lore.kernel.org/netdev/20201016200226.23994-1-ceggers@arri.de/ But actually there's a bigger problem, which is that some taggers which get more rarely tested tend to do some shenanigans which are uncaught for the longest time, and in the meanwhile, their code gets copy-pasted into other taggers, creating a mess. For example, the tail tagging driver for Marvell 88E6060 currently reallocates _every_single_frame_ on TX. Is that an obvious indication that nobody is using it? Sure. Is it a good model to follow when developing a new tail tagging driver? No. DSA has all the information it needs in order to simplify the job of a tagger on TX. It knows whether it's a normal or a tail tagger, and what is the protocol overhead it incurs. So this series performs the reallocation centrally. Changes in v3: - Use dev_kfree_skb_any due to potential hardirq context in xmit path. Changes in v2: - Dropped the tx_realloc counters for now, since the patch was pretty controversial and I lack the time at the moment to introduce new UAPI for that. - Do padding for tail taggers irrespective of whether they need to reallocate the skb or not. ==================== Link: https://lore.kernel.org/r/20201101191620.589272-1-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
Now that we have a central TX reallocation procedure that accounts for the tagger's needed headroom in a generic way, we can remove the skb_cow_head call. Cc: Per Forlin <per.forlin@axis.com> Cc: Oleksij Rempel <linux@rempel-privat.de> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Tested-by: Oleksij Rempel <linux@rempel-privat.de> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
Now that we have a central TX reallocation procedure that accounts for the tagger's needed headroom in a generic way, we can remove the skb_cow_head call. This one is interesting, the DSA tag is 8 bytes on RX and 4 bytes on TX. Because DSA is unaware of asymmetrical tag lengths, the overhead/needed headroom is declared as 8 bytes and therefore 4 bytes larger than it needs to be. If this becomes a problem, and the GSWIP driver can't be converted to a uniform header length, we might need to make DSA aware of separate RX/TX overhead values. Cc: Hauke Mehrtens <hauke@hauke-m.de> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
Now that we have a central TX reallocation procedure that accounts for the tagger's needed headroom in a generic way, we can remove the skb_cow_head call. Similar to the EtherType DSA tagger, the old Marvell tagger can transform an 802.1Q header if present into a DSA tag, so there is no headroom required in that case. But we are ensuring that it exists, regardless (practically speaking, the headroom must be 4 bytes larger than it needs to be). Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
Now that we have a central TX reallocation procedure that accounts for the tagger's needed headroom in a generic way, we can remove the skb_cow_head call. Cc: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
Now that we have a central TX reallocation procedure that accounts for the tagger's needed headroom in a generic way, we can remove the skb_cow_head call. Note that the VLAN code path needs a smaller extra headroom than the regular EtherType DSA path. That isn't a problem, because this tagger declares the larger tag length (8 bytes vs 4) as the protocol overhead, so we are covered in both cases. Cc: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
Now that we have a central TX reallocation procedure that accounts for the tagger's needed headroom in a generic way, we can remove the skb_cow_head call. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
Now that we have a central TX reallocation procedure that accounts for the tagger's needed headroom in a generic way, we can remove the skb_cow_head call. Cc: DENG Qingfang <dqfext@gmail.com> Cc: Sean Wang <sean.wang@mediatek.com> Cc: John Crispin <john@phrozen.org> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
Now that we have a central TX reallocation procedure that accounts for the tagger's needed headroom in a generic way, we can remove the skb_cow_head call. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
Now that we have a central TX reallocation procedure that accounts for the tagger's needed headroom in a generic way, we can remove the skb_cow_head call. Cc: John Crispin <john@phrozen.org> Cc: Alexander Lobakin <alobakin@pm.me> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Christian Eggers authored
The caller (dsa_slave_xmit) guarantees that the frame length is at least ETH_ZLEN and that enough memory for tail tagging is available. Signed-off-by: Christian Eggers <ceggers@arri.de> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Christian Eggers authored
The caller (dsa_slave_xmit) guarantees that the frame length is at least ETH_ZLEN and that enough memory for tail tagging is available. Signed-off-by: Christian Eggers <ceggers@arri.de> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
At the moment, taggers are left with the task of ensuring that the skb headers are writable (which they aren't, if the frames were cloned for TX timestamping, for flooding by the bridge, etc), and that there is enough space in the skb data area for the DSA tag to be pushed. Moreover, the life of tail taggers is even harder, because they need to ensure that short frames have enough padding, a problem that normal taggers don't have. The principle of the DSA framework is that everything except for the most intimate hardware specifics (like in this case, the actual packing of the DSA tag bits) should be done inside the core, to avoid having code paths that are very rarely tested. So provide a TX reallocation procedure that should cover the known needs of DSA today. Note that this patch also gives the network stack a good hint about the headroom/tailroom it's going to need. Up till now it wasn't doing that. So the reallocation procedure should really be there only for the exceptional cases, and for cloned packets which need to be unshared. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Tested-by: Christian Eggers <ceggers@arri.de> # For tail taggers only Tested-by: Kurt Kanzenbach <kurt@linutronix.de> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
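A hedged sketch of the central check (modeled on the description above; the names and its exact placement relative to dsa_slave_xmit() may differ): grow the head/tail only by what is still missing, and only when room is actually lacking or the skb is cloned.

    #include <linux/skbuff.h>

    static int dsa_maybe_realloc(struct sk_buff *skb, int needed_headroom,
                                 int needed_tailroom)
    {
            int grow_head = needed_headroom - skb_headroom(skb);
            int grow_tail = needed_tailroom - skb_tailroom(skb);

            if (grow_head <= 0 && grow_tail <= 0 && !skb_cloned(skb))
                    return 0;       /* fast path: enough room and the data is ours */

            return pskb_expand_head(skb, grow_head > 0 ? grow_head : 0,
                                    grow_tail > 0 ? grow_tail : 0, GFP_ATOMIC);
    }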
-
YueHaibing authored
Fix smatch warning: net/openvswitch/meter.c:427 ovs_meter_cmd_set() warn: passing zero to 'PTR_ERR'. dp_meter_create() never returns NULL, so use IS_ERR instead of IS_ERR_OR_NULL to fix this. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Acked-by: Tonghao Zhang <xiangxia.m.yue@gmail.com> Link: https://lore.kernel.org/r/20201031060153.39912-1-yuehaibing@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
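The corrected pattern, sketched with the call arguments abbreviated: dp_meter_create() returns either a valid pointer or an ERR_PTR() value, never NULL, so IS_ERR() is the right check and PTR_ERR() can no longer be handed a success value.

    meter = dp_meter_create(/* ... */);
    if (IS_ERR(meter))              /* was IS_ERR_OR_NULL(meter) */
            return PTR_ERR(meter);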
-
YueHaibing authored
Remove duplicated include. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Link: https://lore.kernel.org/r/20201031024940.29716-1-yuehaibing@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
YueHaibing authored
Remove duplicated include. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Link: https://lore.kernel.org/r/20201031024744.39020-1-yuehaibing@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Yuchung Cheng authored
During TCP fast recovery, the congestion control in charge is by default the Proportional Rate Reduction (PRR) algorithm, unless the congestion control module specifies otherwise (e.g. BBR). Previously, when tcp_packets_in_flight() is below snd_ssthresh, PRR would slow start upon receiving an ACK that 1) cumulatively acknowledges retransmitted data and 2) does not detect further lost retransmissions. Such conditions indicate the repair is in good steady progress after the first round trip of recovery. Otherwise PRR adopts the packet conservation principle and sends only the amount that was newly delivered (as indicated by this ACK). This patch generalizes the previous design principle to also account for newly sent data, not just retransmissions: as long as the delivery is making good progress, both retransmissions and new data are counted, making PRR more cautious about slow starting. Suggested-by: Matt Mathis <mattmathis@google.com> Suggested-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/r/20201031013412.1973112-1-ycheng@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
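A hedged sketch of the per-ACK PRR send quota in the shape of RFC 6937 (the names are illustrative, not the exact tcp_cwnd_reduction() variables), with the "good progress" condition generalized as described above to cover newly sent data as well as retransmissions:

    /* How many segments PRR allows to be sent in response to this ACK. */
    static int prr_sndcnt(int pipe, int ssthresh, int recover_fs,
                          int prr_delivered, int prr_out,
                          int newly_delivered, int good_progress)
    {
            int sndcnt;

            if (pipe > ssthresh) {
                    /* Proportional reduction toward ssthresh. */
                    sndcnt = (prr_delivered * ssthresh + recover_fs - 1) / recover_fs
                             - prr_out;
            } else if (good_progress) {
                    /* Slow start, but only while delivery keeps making good
                     * progress -- now judged on retransmitted *and* newly
                     * sent data, per this patch. */
                    int limit = prr_delivered - prr_out;

                    if (limit < newly_delivered)
                            limit = newly_delivered;
                    limit += 1;     /* one extra segment keeps the ACK clock going */
                    sndcnt = ssthresh - pipe < limit ? ssthresh - pipe : limit;
            } else {
                    /* Packet conservation: send only what was just delivered. */
                    sndcnt = newly_delivered;
            }
            return sndcnt > 0 ? sndcnt : 0;
    }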
-
Jakub Kicinski authored
Vladimir Oltean says: ==================== VLAN improvements for Ocelot switch The main reason why I started this work is that deleting the bridge mdb entries fails when the bridge is deleted, as described here: https://lore.kernel.org/netdev/20201015173355.564934-1-vladimir.oltean@nxp.com/ In short, that happens because the bridge mdb entries are added with a vid of 1, but deletion is attempted with a vid of 0. So the deletion code fails to find the mdb entries. The solution is to make ocelot use a pvid of 0 when it is under a bridge with vlan_filtering 0. When vlan_filtering is 1, the pvid of the bridge is what is programmed into the hardware. The patch series also uncovers more bugs and does some more cleanup, but the above is the main idea behind it. ==================== Link: https://lore.kernel.org/r/20201031102916.667619-1-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-