- 25 Jul, 2023 11 commits
-
-
Dmitry Antipov authored
Prefer 'strscpy()' over 'strlcpy()' in 'mwifiex_init_hw_fw()'. Signed-off-by: Dmitry Antipov <dmantipov@yandex.ru> Reviewed-by: Brian Norris <briannorris@chromium.org> Signed-off-by: Kalle Valo <kvalo@kernel.org> Link: https://lore.kernel.org/r/20230629085115.180499-1-dmantipov@yandex.ru
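As a quick, hedged illustration of the difference (the function and buffer names below are made up, not the actual mwifiex call site): strscpy() bounds its reads to the destination size, always NUL-terminates, and returns -E2BIG on truncation, whereas strlcpy() walks the whole source string just to compute its return value.

    #include <linux/string.h>
    #include <linux/types.h>

    /* Illustrative sketch only, not the mwifiex code. */
    static ssize_t copy_fw_name(char *dst, size_t dst_size, const char *src)
    {
            /* Bounded copy: never reads 'src' past 'dst_size' bytes,
             * always NUL-terminates 'dst', returns -E2BIG on truncation. */
            return strscpy(dst, src, dst_size);
    }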
-
Bitterblue Smith authored
Theoretically this chip can handle 127 clients. Only compile tested but it should work as well as the RTL8188FU. Signed-off-by: Bitterblue Smith <rtl8821cerfe2@gmail.com> Signed-off-by: Kalle Valo <kvalo@kernel.org> Link: https://lore.kernel.org/r/b2876c56-0ea7-c398-5c9b-635f9f894f2c@gmail.com
-
Bitterblue Smith authored
Theoretically this chip can handle 127 clients. Tested only very briefly but it should work as well as the RTL8188FU. Signed-off-by: Bitterblue Smith <rtl8821cerfe2@gmail.com> Signed-off-by: Kalle Valo <kvalo@kernel.org> Link: https://lore.kernel.org/r/56c9b186-ba9a-8469-652d-ce1709813e9e@gmail.com
-
Bitterblue Smith authored
Theoretically this chip can handle 15 clients. Tested only very briefly but it should work as well as the RTL8188FU. Signed-off-by: Bitterblue Smith <rtl8821cerfe2@gmail.com> Signed-off-by: Kalle Valo <kvalo@kernel.org> Link: https://lore.kernel.org/r/ce04a0a1-df72-ea30-f742-8834e01457f5@gmail.com
-
Bitterblue Smith authored
Theoretically this chip can handle 127 clients. Tested only very briefly but it should work as well as the RTL8188FU. Signed-off-by: Bitterblue Smith <rtl8821cerfe2@gmail.com> Reviewed-by: Ping-Ke Shih <pkshih@realtek.com> Signed-off-by: Kalle Valo <kvalo@kernel.org> Link: https://lore.kernel.org/r/ffcabba5-7e9e-674c-ad03-73646b040b96@gmail.com
-
Yueh-Shun Li authored
Spell "transmits" properly. Found by searching for keyword "tranm". Signed-off-by: Yueh-Shun Li <shamrocklee@posteo.net> Signed-off-by: Kalle Valo <kvalo@kernel.org> Link: https://lore.kernel.org/r/20230622012627.15050-4-shamrocklee@posteo.net
-
Zhang Shurong authored
If kstrtobool_from_user() fails, rtw89_debug_priv_btc_manual_set should return a negative error code instead of returning the count directly. Fix this bug by returning the error code of the failed "kstrtobool_from_user" call instead of the count. The now-unnecessary label "out" is dropped as part of this correction. Fixes: e3ec7017 ("rtw89: add Realtek 802.11ax driver") Signed-off-by: Zhang Shurong <zhang_shurong@foxmail.com> Acked-by: Ping-Ke Shih <pkshih@realtek.com> Signed-off-by: Kalle Valo <kvalo@kernel.org> Link: https://lore.kernel.org/r/tencent_1C09B99BD7DA9CAD18B00C8F0F050F540607@qq.com
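A minimal sketch of the error-handling pattern described above, assuming a standard debugfs-style write handler; the function and variable names are illustrative, not the actual rtw89 code:

    #include <linux/fs.h>
    #include <linux/kstrtox.h>

    static ssize_t example_manual_set(struct file *filp, const char __user *user_buf,
                                      size_t count, loff_t *loff)
    {
            bool enable;
            int ret;

            ret = kstrtobool_from_user(user_buf, count, &enable);
            if (ret)
                    return ret;     /* propagate the error, do not return 'count' */

            /* ... apply 'enable' to the device state ... */

            return count;
    }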
-
Dmitry Antipov authored
Since none of the iterators called by 'rtw_iterate_vifs()' uses the 'mac' argument, it may be omitted, and 'struct rtw_vifs_entry' may be simplified accordingly. Signed-off-by: Dmitry Antipov <dmantipov@yandex.ru> Reviewed-by: Ping-Ke Shih <pkshih@realtek.com> Signed-off-by: Kalle Valo <kvalo@kernel.org> Link: https://lore.kernel.org/r/20230628072327.167196-4-dmantipov@yandex.ru
-
Dmitry Antipov authored
Drop no longer used 'bulkout_size' of 'struct rtw_usb' as well as related macros from usb.h and leftovers in 'rtw_usb_parse()'. This follows commit 462c8db6 ("wifi: rtw88: usb: drop now unnecessary URB size check"). Reviewed-by: Ping-Ke Shih <pkshih@realtek.com> Reviewed-by: Sascha Hauer <s.hauer@pengutronix.de> Signed-off-by: Dmitry Antipov <dmantipov@yandex.ru> Signed-off-by: Kalle Valo <kvalo@kernel.org> Link: https://lore.kernel.org/r/20230628072327.167196-3-dmantipov@yandex.ru
-
Dmitry Antipov authored
Drop unused and set but unused 'last_push' of 'struct rtw_txq', 'wireless_set' of 'struct rtw_sta_info', 'usb_txagg_num' of 'struct rtw_usb' and 'n' of 'struct rx_usb_ctrl_block', unused definition of 'struct rtw_timer_list', adjust related code. Reviewed-by: Ping-Ke Shih <pkshih@realtek.com> Signed-off-by: Dmitry Antipov <dmantipov@yandex.ru> Signed-off-by: Kalle Valo <kvalo@kernel.org> Link: https://lore.kernel.org/r/20230628072327.167196-2-dmantipov@yandex.ru
-
Dmitry Antipov authored
Fix possible crash and memory leak on driver unload by deleting TX purge timer and freeing C2H queue in 'rtw_core_deinit()', shrink critical section in the latter by freeing COEX queue out of TX report lock scope. Reviewed-by: Ping-Ke Shih <pkshih@realtek.com> Signed-off-by: Dmitry Antipov <dmantipov@yandex.ru> Signed-off-by: Kalle Valo <kvalo@kernel.org> Link: https://lore.kernel.org/r/20230628072327.167196-1-dmantipov@yandex.ru
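A hedged sketch of the teardown ordering described above; it references driver-internal types, and the field names follow the commit text rather than a verified diff:

    static void example_core_deinit(struct rtw_dev *rtwdev)
    {
            /* Stop the TX report purge timer before its backing state is
             * freed, so it cannot fire afterwards. */
            del_timer_sync(&rtwdev->tx_report.purge_timer);

            /* Free any queued C2H events so they are not leaked. */
            skb_queue_purge(&rtwdev->c2h_queue);

            /* Free the COEX queue outside the TX report lock, keeping the
             * critical section small. */
            skb_queue_purge(&rtwdev->coex.queue);
    }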
-
- 24 Jul, 2023 1 commit
-
-
Eric Dumazet authored
IPv6 inet sockets are supposed to have a "struct ipv6_pinfo" field at the end of their definition, so that inet6_sk_generic() can derive from socket size the offset of the "struct ipv6_pinfo". This is very fragile, and prevents adding bigger alignment in sockets, because inet6_sk_generic() does not work if the compiler adds padding after the ipv6_pinfo component. We are currently working on a patch series to reorganize TCP structures for better data locality and found issues similar to the one fixed in commit f5d54767 ("tcp: fix tcp_inet6_sk() for 32bit kernels"). An alternative would be to force an alignment on "struct ipv6_pinfo", greater than or equal to __alignof__(any ipv6 sock), to ensure there is no padding. This does not look great. v2: fix typo in mptcp_proto_v6_init() (Paolo) Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Chao Wu <wwchao@google.com> Cc: Wei Wang <weiwan@google.com> Cc: Coco Li <lixiaoyan@google.com> Cc: YiFei Zhu <zhuyifei@google.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
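For context, inet6_sk_generic() derives the ipv6_pinfo location from the protocol's object size, roughly as sketched below (a simplified rendering of the helper the commit describes, not necessarily the exact in-tree form):

    #include <net/sock.h>
    #include <linux/ipv6.h>

    static struct ipv6_pinfo *inet6_sk_generic(struct sock *sk)
    {
            /* Assumes 'struct ipv6_pinfo' sits exactly at the end of the
             * protocol-specific socket; any trailing padding added by the
             * compiler breaks this assumption. */
            const int offset = sk->sk_prot->obj_size - sizeof(struct ipv6_pinfo);

            return (struct ipv6_pinfo *)(((u8 *)sk) + offset);
    }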
-
- 23 Jul, 2023 9 commits
-
-
Patrick Rohr authored
This change adds a new sysctl accept_ra_min_rtr_lft to specify the minimum acceptable router lifetime in an RA. If the received RA router lifetime is less than the configured value (and not 0), the RA is ignored. This is useful for mobile devices, whose battery life can be impacted by networks that configure RAs with a short lifetime. On such networks, the device should never gain IPv6 provisioning and should attempt to drop RAs via hardware offload, if available. Signed-off-by: Patrick Rohr <prohr@google.com> Cc: Maciej Żenczykowski <maze@google.com> Cc: Lorenzo Colitti <lorenzo@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
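A minimal sketch of the acceptance rule described above, under the assumption that the check simply compares the RA's router lifetime against the new sysctl; names are illustrative, not the in-tree code:

    #include <linux/types.h>

    /* Returns true if an RA with this router lifetime should be processed,
     * given the configured accept_ra_min_rtr_lft value. */
    static bool ra_lifetime_acceptable(u32 router_lifetime, u32 min_rtr_lft)
    {
            /* Lifetime 0 ("stop using this router") is always honored. */
            if (!router_lifetime)
                    return true;

            return router_lifetime >= min_rtr_lft;
    }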
-
justinstitt@google.com authored
`strncpy` is deprecated for use on NUL-terminated destination strings [1]. Even call sites utilizing length-bounded destination buffers should switch over to using `strtomem` or `strtomem_pad`. In this case, however, the compiler is unable to determine the size of the `data` buffer which renders `strtomem` unusable. Due to this, `strscpy` should be used. It should be noted that most call sites already zero-initialize the destination buffer. However, I've opted to use `strscpy_pad` to maintain the same exact behavior that `strncpy` produced (zero-padded tail up to `len`). Also see [3]. [1]: www.kernel.org/doc/html/latest/process/deprecated.html#strncpy-on-nul-terminated-strings [2]: elixir.bootlin.com/linux/v6.3/source/net/ethtool/ioctl.c#L1944 [3]: manpages.debian.org/testing/linux-manual-4.8/strscpy.9.en.html Link: https://github.com/KSPP/linux/issues/90 Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Justin Stitt <justinstitt@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
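A hedged sketch of the replacement pattern discussed above; the wrapper and its parameters are placeholders rather than the actual ethtool call site:

    #include <linux/string.h>

    static void copy_stat_string(char *data, const char *name, size_t len)
    {
            /* Like the old strncpy(): zero-pad the tail of 'data' up to
             * 'len', but guarantee NUL-termination and bounded reads. */
            strscpy_pad(data, name, len);
    }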
-
David S. Miller authored
Anjali Kulkarni says: ==================== Process connector bug fixes & enhancements Oracle DB is trying to solve a performance overhead problem it has been facing for the past 10 years and using this patch series, we can fix this issue. Oracle DB runs on a large scale with 100000s of short lived processes, starting up and exiting quickly. A process monitoring DB daemon which tracks and cleans up after processes that have died without a proper exit needs notifications only when a process died with a non-zero exit code (which should be rare). Due to the pmon architecture, which is distributed, each process is independent and has minimal interaction with pmon. Hence fd based solutions to track a process's spawning and exit cannot be used. Pmon needs to detect the abnormal death of a process so it can cleanup after. Currently it resorts to checking /proc every few seconds. Other methods we tried like using system call to reduce the above overhead were not accepted upstream. With this change, we add event based filtering to proc connector module so that DB can only listen to the events it is interested in. A new event type PROC_EVENT_NONZERO_EXIT is added, which is only sent by kernel to a listening application when any process exiting has a non-zero exit status. This change will give Oracle DB substantial performance savings - it takes 50ms to scan about 8K PIDs in /proc, about 500ms for 100K PIDs. DB does this check every 3 secs, so over an hour we save 10secs for 100K PIDs. With this, a client can register to listen for only exit or fork or a mix or all of the events. This greatly enhances performance - currently, we need to listen to all events, and there are 9 different types of events. For eg. handling 3 types of events - 8K-forks + 8K-exits + 8K-execs takes 200ms, whereas handling 2 types - 8K-forks + 8K-exits takes about 150ms, and handling just one type - 8K exits takes about 70ms. Measuring the time using pidfds for monitoring 8K process exits took 4 times longer - 200ms, as compared to 70ms using only exit notifications of proc connector. Hence, we cannot use pidfd for our use case. This kind of a new event could also be useful to other applications like Google's lmkd daemon, which needs a killed process's exit notification. This patch series is organized as follows - Patch 1 : Needed for patch 3 to work. Patch 2 : Needed for patch 3 to work. Patch 3 : Fixes some bugs in proc connector, details in the patch. Patch 4 : Adds event based filtering for performance enhancements. Patch 5 : Allow non-root users access to proc connector events. Patch 6 : Selftest code for proc connector. v9->v10 changes: - Rebased to net-next, re-compiled and re-tested. v8->v9 changes: - Added sha1 ("title") of reversed patch as suggested by Eric Dumazet. v7->v8 changes: - Fixed an issue pointed by Liam Howlett in v7. v6->v7 changes: - Incorporated Liam Howlett's comments on v6 - Incorporated Kalesh Anakkur Purayil's comments v5->v6 changes: - Incorporated Liam Howlett's comments - Removed FILTER define from proc_filter.c and added a "-f" run-time option to run new filter code. - Made proc_filter.c a selftest in tools/testing/selftests/connector v4->v5 changes: - Change the cover letter - Fix a small issue in proc_filter.c v3->v4 changes: - Fix comments by Jakub Kicinski to incorporate root access changes within bind call of connector v2->v3 changes: - Fix comments by Jakub Kicinski to separate netlink (patch 2) (after layering) from connector fixes (patch 3). - Minor fixes suggested by Jakub. - Add new multicast group level permissions check at netlink layer. Split this into netlink & connector layers (patches 6 & 7) v1->v2 changes: - Fix comments by Jakub Kicinski to keep layering within netlink and update kdocs. - Move non-root users access patch last in series so remaining patches can go in first. v->v1 changes: - Changed commit log in patch 4 as suggested by Christian Brauner - Changed patch 4 to make more fine grained access to non-root users - Fixed warning in cn_proc.c, Reported-by: kernel test robot <lkp@intel.com> - Fixed some existing warnings in cn_proc.c ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Anjali Kulkarni authored
Run as ./proc_filter -f to run new filter code. Run without "-f" to run usual proc connector code without the new filtering code. Signed-off-by: Anjali Kulkarni <anjali.k.kulkarni@oracle.com> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Anjali Kulkarni authored
There were a couple of reasons for not allowing non-root users access initially - one is that at some point there was no proper receive buffer management in place for netlink multicast. But that should be long fixed. See link below for more context. Second is that some of the messages may contain data that is root only. But this should be handled with a finer granularity, which is being done at the protocol layer. The only problematic protocols are nf_queue and the firewall netlink. Hence, this restriction for non-root access was relaxed for NETLINK_ROUTE initially: https://lore.kernel.org/all/20020612013101.A22399@wotan.suse.de/ This restriction has also been removed for the following protocols: NETLINK_KOBJECT_UEVENT, NETLINK_AUDIT, NETLINK_SOCK_DIAG, NETLINK_GENERIC, NETLINK_SELINUX. Since process connector messages are not sensitive (process fork, exit notifications etc.), and anyone can read /proc data, we can allow non-root access here. However, since process event notification is not the only consumer of NETLINK_CONNECTOR, we can make this change even more fine grained than the protocol level, by checking for the multicast group within the protocol. Allow non-root access for NETLINK_CONNECTOR via NL_CFG_F_NONROOT_RECV, but add a new bind function, cn_bind(), which allows non-root access only for the CN_IDX_PROC multicast group. Signed-off-by: Anjali Kulkarni <anjali.k.kulkarni@oracle.com> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
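A sketch of the per-group gate described above, as I read the series; treat the bind callback prototype and the capability check as assumptions rather than the verified implementation:

    #include <linux/capability.h>
    #include <linux/connector.h>
    #include <net/net_namespace.h>

    /* Allow unprivileged binds only to the CN_IDX_PROC multicast group. */
    static int cn_bind(struct net *net, int group)
    {
            unsigned long groups = (unsigned long)group;

            if (ns_capable(net->user_ns, CAP_NET_ADMIN))
                    return 0;

            if (test_bit(CN_IDX_PROC - 1, &groups))
                    return 0;

            return -EPERM;
    }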
-
Anjali Kulkarni authored
This patch adds the capability to filter messages sent by the proc connector on the event type supplied in the message from the client to the connector. The client can register to listen for an event type given in struct proc_input. This event based filtering will greatly enhance performance - handling 8K exits takes about 70ms, whereas 8K-forks + 8K-exits takes about 150ms & handling 8K-forks + 8K-exits + 8K-execs takes 200ms. There are currently 9 different types of events, and we need to listen to all of them. Also, measuring the time using pidfds for monitoring 8K process exits took much longer - 200ms, as compared to 70ms using only exit notifications of proc connector. We also add a new event type - PROC_EVENT_NONZERO_EXIT, which is only sent by kernel to a listening application when any process exiting has a non-zero exit status. This will help clients like Oracle DB, where a monitoring process wants notifications for non-zero process exits so it can cleanup after them. This kind of a new event could also be useful to other applications like Google's lmkd daemon, which needs a killed process's exit notification. The patch takes care that existing clients using the old mechanism of not sending the event type keep working without any changes. The cn_filter function checks whether the event type being notified via the proc connector matches the event type requested by the client, before sending (match) or dropping (no match) a packet. Signed-off-by: Anjali Kulkarni <anjali.k.kulkarni@oracle.com> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
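A hedged sketch of how a client opts in to a single event type, assuming the uapi additions this patch describes (struct proc_input with mcast_op and event_type, plus the new PROC_EVENT_NONZERO_EXIT flag); the helper itself is hypothetical:

    #include <linux/cn_proc.h>

    /* Payload carried in the cn_msg sent to the CN_IDX_PROC/CN_VAL_PROC id. */
    static void fill_filter_request(struct proc_input *input)
    {
            input->mcast_op = PROC_CN_MCAST_LISTEN;
            /* Only deliver exit events for processes that exit with a
             * non-zero status. */
            input->event_type = PROC_EVENT_NONZERO_EXIT;
    }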
-
Anjali Kulkarni authored
The current proc connector code has the following bugs: if there is more than one listener for proc connector messages and one of them deregisters using PROC_CN_MCAST_IGNORE, it will still receive all proc connector messages as long as there is another listener. Another issue is if one client calls PROC_CN_MCAST_LISTEN, and another one calls PROC_CN_MCAST_IGNORE, then both will end up not getting any messages. This patch adds filtering and drops the packet if the client has sent PROC_CN_MCAST_IGNORE. This data is stored in the client socket's sk_user_data. In addition, we only increment or decrement proc_event_num_listeners once per client. This fixes the above issues. cn_release is the release function added for NETLINK_CONNECTOR. It uses the newly added netlink_release function added to netlink_sock. It will free sk_user_data. Signed-off-by: Anjali Kulkarni <anjali.k.kulkarni@oracle.com> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Anjali Kulkarni authored
A new function netlink_release is added in netlink_sock to store the protocol's release function. This is called when the socket is deleted. This can be supplied by the protocol via the release function in netlink_kernel_cfg. This is being added for the NETLINK_CONNECTOR protocol, so it can free its data when the socket is deleted. Signed-off-by: Anjali Kulkarni <anjali.k.kulkarni@oracle.com> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com> Acked-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
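A sketch of how the connector might wire up the new hook, per the commit text; the release callback's exact prototype here is my assumption, and cn_rx_skb stands in for the existing connector input handler:

    #include <linux/netlink.h>
    #include <linux/slab.h>

    static void cn_release(struct sock *sk, unsigned long *groups)
    {
            /* Drop the per-client filter state stored in sk_user_data. */
            kfree(sk->sk_user_data);
            sk->sk_user_data = NULL;
    }

    static struct netlink_kernel_cfg cn_netlink_cfg = {
            .input   = cn_rx_skb,   /* existing connector input handler */
            .release = cn_release,  /* new: called when the socket is deleted */
    };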
-
Anjali Kulkarni authored
To use filtering at the connector & cn_proc layers, we need to enable filtering in the netlink layer. This reverses the patch which removed netlink filtering - commit ID for that patch: 549017aa (netlink: remove netlink_broadcast_filtered). Signed-off-by: Anjali Kulkarni <anjali.k.kulkarni@oracle.com> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com> Acked-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
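For reference, the restored broadcast variant takes a per-socket filter callback, roughly with the shape below; the exact signature is from memory of the pre-removal API and should be treated as an assumption:

    int netlink_broadcast_filtered(struct sock *ssk, struct sk_buff *skb,
                                   u32 portid, u32 group, gfp_t allocation,
                                   int (*filter)(struct sock *dsk,
                                                 struct sk_buff *skb, void *data),
                                   void *filter_data);

    /* The connector can then pass its cn_filter() plus per-message context,
     * so delivery is skipped for sockets whose filter rejects the packet. */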
-
- 22 Jul, 2023 7 commits
-
-
Jakub Kicinski authored
Jakub Kicinski says: ==================== net: page_pool: remove page_pool_release_page() page_pool_release_page() is a historic artefact from before recycling of pages attached to skbs was supported. Theoretical uses for it may be thought up but in practice all existing users can be converted to use skb_mark_for_recycle() instead. This code was previously posted as part of the memory provider RFC. https://lore.kernel.org/all/20230707183935.997267-1-kuba@kernel.org/ ==================== Link: https://lore.kernel.org/r/20230720010409.1967072-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Now that page_pool_release_page() is not exported we can merge it with page_pool_return_page(). I believe that the "Do not replace this with page_pool_return_page()" comment was there in case page_pool_return_page() was not inlined, to avoid two function calls. Acked-by: Jesper Dangaard Brouer <hawk@kernel.org> Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com> Link: https://lore.kernel.org/r/20230720010409.1967072-5-kuba@kernel.org Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
There seems to be no user calling page_pool_release_page() for legit reasons, all the users simply haven't been converted to skb-based recycling, yet. Previous changes converted them. Update the docs, and unexport the function. Link: https://lore.kernel.org/r/20230720010409.1967072-4-kuba@kernel.org Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
stmmac removes pages from the page pool after attaching them to skbs. Use page recycling instead. skb heads are always copied, and pages are always from page pool in this driver. We could as well mark all allocated skbs for recycling. Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com> Link: https://lore.kernel.org/r/20230720010409.1967072-3-kuba@kernel.org Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
tsnep builds an skb with napi_build_skb() and then calls page_pool_release_page() for the page in which that skb's head sits. Use recycling instead; recycling of heads works just fine. Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com> Link: https://lore.kernel.org/r/20230720010409.1967072-2-kuba@kernel.org Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
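A hedged sketch of the conversion pattern these driver patches apply; the surrounding function and error handling are illustrative, with napi_build_skb(), skb_mark_for_recycle() and page_pool_put_full_page() being the real APIs involved:

    #include <linux/skbuff.h>
    #include <net/page_pool.h>

    static struct sk_buff *rx_build_skb(struct page_pool *pool, struct page *page)
    {
            struct sk_buff *skb;

            skb = napi_build_skb(page_address(page), PAGE_SIZE);
            if (!skb) {
                    page_pool_put_full_page(pool, page, true);
                    return NULL;
            }

            /* Previously: page_pool_release_page(pool, page). Instead, let
             * the skb free path return the page to its pool. */
            skb_mark_for_recycle(skb);

            return skb;
    }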
-
Jiri Pirko authored
Currently, if cmd in the split ops array is of lower value than the previous one, genl_validate_ops() continues to do the checks as if the values are equal. This may result in a non-obvious WARN_ON() hit in these checks. Instead, check the incorrect ordering explicitly and put a WARN_ON() in case it is broken. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20230720111354.562242-1-jiri@resnulli.us Signed-off-by: Jakub Kicinski <kuba@kernel.org>
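A simplified sketch of the explicit ordering check described above; the real validation also has to account for do/dump pairs sharing a cmd, so treat this as illustrative:

    #include <net/genetlink.h>

    static int check_split_ops_sorted(const struct genl_split_ops *ops,
                                      unsigned int n_ops)
    {
            unsigned int i;

            for (i = 1; i < n_ops; i++) {
                    /* Warn loudly: a misordered ops table is a bug in the
                     * registering family, not a runtime condition. */
                    if (WARN_ON(ops[i].cmd < ops[i - 1].cmd))
                            return -EINVAL;
            }

            return 0;
    }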
-
Marc Kleine-Budde authored
Linus seems to like the MAINTAINERS file sorted, see c192ac73 ("MAINTAINERS 2: Electric Boogaloo"). Since this is currently not the case, restore the sort order. Fixes: 3abf3d15 ("MAINTAINERS: ASP 2.0 Ethernet driver maintainers") Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de> Acked-by: Justin Chen <justin.chen@broadcom.com> Reviewed-by: Randy Dunlap <rdunlap@infradead.org> Acked-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://lore.kernel.org/r/20230720151107.679668-1-mkl@pengutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 21 Jul, 2023 12 commits
-
-
David S. Miller authored
Hariprasad Kelam says: ==================== octeontx2-pf: support Round Robin scheduling octeontx2 and CN10K silicons support Round Robin scheduling. When multiple traffic flows reach the transmit level with the same priority, Round Robin scheduling picks the traffic flow with the highest quantum value. With this support, the user can add multiple classes with the same priority and different quantum in htb offload. This series of patches adds support for the same. Patch1: implement transmit scheduler allocation algorithm in preparation for supporting round robin scheduling. Patch2: Allow quantum parameter in HTB offload mode. Patch3: extends octeontx2 htb offload support for Round Robin scheduling Patch4: extend QOS documentation for Round Robin scheduling Hariprasad Kelam (1): docs: octeontx2: extend documentation for Round Robin scheduling Naveen Mamindlapalli (3): octeontx2-pf: implement transmit schedular allocation algorithm sch_htb: Allow HTB quantum parameter in offload mode octeontx2-pf: htb offload support for Round Robin scheduling --- v4 * update classid values in documentation. v3 * 1. update QOS documentation for round robin scheduling 2. added out of bound checks for quantum parameter v2 * change data type of otx2_index_used to reduce size of structure otx2_qos_cfg ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hariprasad Kelam authored
Add example tc-htb commands for Round robin scheduling Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Naveen Mamindlapalli authored
When multiple traffic flows reach the transmit level with the same priority, Round Robin scheduling picks the traffic flow with the highest quantum value. With this support, the user can add multiple classes with the same priority and different quantum values. This patch makes the necessary changes to support this. Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com> Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Naveen Mamindlapalli authored
The current implementation of HTB offload returns the EINVAL error for the quantum parameter. This patch removes the error-returning checks for the 'quantum' parameter and populates its value into the tc_htb_qopt_offload structure so that the driver can use it. Add a quantum parameter check in the mlx5 driver, as mlx5 devices are not capable of supporting the quantum parameter when HTB offload is used. Report an error if the quantum parameter is set to a non-default value. Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com> Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
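An illustrative sketch of the driver-side validation described above for hardware that cannot honor a per-class quantum; the surrounding function is hypothetical, and the quantum/extack field names follow the commit text:

    #include <net/pkt_cls.h>

    static int example_htb_leaf_alloc(struct tc_htb_qopt_offload *htb)
    {
            /* This hardware has no per-class quantum knob: accept only the
             * default (unset) value and report anything else via extack. */
            if (htb->quantum) {
                    NL_SET_ERR_MSG_MOD(htb->extack,
                                       "quantum parameter is not supported");
                    return -EOPNOTSUPP;
            }

            /* ... program the leaf node as before ... */
            return 0;
    }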
-
Naveen Mamindlapalli authored
Unlike strict priority, where the number of classes is limited to a maximum of 8, there is no restriction on the number of DWRR child nodes unless the count exceeds the maximum number of child nodes supported. Hardware expects strict priority transmit scheduler indexes to be mapped to their priority. This patch defines a transmit scheduler allocation algorithm such that the above requirement is honored. Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com> Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Petr Machata says: ==================== mlxsw: Permit enslavement to netdevices with uppers The mlxsw driver currently makes the assumption that the user applies configuration in a bottom-up manner. Thus netdevices need to be added to the bridge before IP addresses are configured on that bridge or SVI added on top of it. Enslaving a netdevice to another netdevice that already has uppers is in fact forbidden by mlxsw for this reason. Despite this safety, it is rather easy to get into situations where the offloaded configuration is just plain wrong. As an example, take a front panel port, configure an IP address: it gets a RIF. Now enslave the port to the bridge, and the RIF is gone. Remove the port from the bridge again, but the RIF never comes back. There is a number of similar situations, where changing the configuration there and back utterly breaks the offload. Similarly, detaching a front panel port from a configured topology means unoffloading of this whole topology -- VLAN uppers, next hops, etc. Attaching the port back is then not permitted at all. If it were, it would not result in a working configuration, because much of mlxsw is written to react to changes in immediate configuration. There is nothing that would go visit netdevices in the attached-to topology and offload existing routes and VLAN memberships, for example. In this patchset, introduce a number of replays to be invoked so that this sort of post-hoc offload is supported. Then remove the vetoes that disallowed enslavement of front panel ports to other netdevices with uppers. The patchset progresses as follows: - In patch #1, fix an issue in the bridge driver. To my knowledge, the issue could not have resulted in a buggy behavior previously, and thus is packaged with this patchset instead of being sent separately to net. - In patch #2, add a new helper to the switchdev code. - In patch #3, drop mlxsw selftests that will not be relevant after this patchset anymore. - Patches #4, #5, #6, #7 and #8 prepare the codebase for smoother introduction of the rest of the code. - Patches #9, #10, #11, #12, #13 and #14 replay various aspects of upper configuration when a front panel port is introduced into a topology. Individual patches take care of bridge and LAG RIF memberships, switchdev replay, nexthop and neighbors replay, and MACVLAN offload. - Patches #15 and #16 introduce RIFs for newly-relevant netdevices when a front panel port is enslaved (in which case all uppers are newly relevant), or, respectively, deslaved (in which case the newly-relevant netdevice is the one being deslaved). - Up until this point, the introduced scaffolding was not really used, because mlxsw still forbids enslavement of mlxsw netdevices to uppers with uppers. In patch #17, this condition is finally relaxed. A sizable selftest suite is available to test all this new code. That will be sent in a separate patchset. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
Enslaving of front panel ports (and their uppers) to netdevices that already have uppers is currently forbidden. In the previous patches, a number of replays have been added. Those ensure that various bits of state, such as next hops or switchdev objects, are offloaded when they become relevant due to a mlxsw lower being introduced into the topology. However the act of actually, for example, enslaving a front-panel port to a bridge with uppers, has been vetoed so far. In this patch, remove the vetoes and permit the operation. mlxsw currently validates creation of "interesting" uppers. Thus creating VLAN netdevices on top of 802.1ad bridges is forbidden if the bridge has an mlxsw lower, but permitted in general. This validation code never gets run when a port is introduced as a lower of an existing netdevice structure. Thus when enslaving an mlxsw netdevice to netdevices with uppers, invoke the PRECHANGEUPPER event handler for each netdevice above the one that the front panel port is being enslaved to. This way the tower of netdevices above the attachment point is validated. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
When a netdevice is removed from a bridge or a LAG, and it has an IP address, it should join the router and gain a RIF. Do that by replaying address addition event on the netdevice. When handling deslavement of LAG or its upper from a bridge device, the replay should be done after all the lowers of the LAG have left the bridge. Thus these scenarios are handled by passing replay_deslavement of false, and by invoking, after the lowers have been processed, a new helper, mlxsw_sp_netdevice_post_lag_event(), which does the per-LAG / -upper handling, and in particular invokes the replay. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
Enslaving of front panel ports (and their uppers) to netdevices that already have uppers is currently forbidden. When this is permitted, any uppers with IP addresses need to have the NETDEV_UP inetaddr event replayed, so that any RIFs are created. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
As neighbours are created, mlxsw is involved through the netevent notifications. When at the time there is no RIF for a given neighbour, the notification is not acted upon. When the RIF is later created, these outstanding neighbours are left unoffloaded and cause traffic to go through the SW datapath. In order to fix this issue, as a RIF is created, walk the ARP and ND tables and find neighbours for the netdevice that represents the RIF. Then schedule neighbour work for them, allowing them to be offloaded. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
If an IP address is added to a MACVLAN netdevice, the effect is that of configuring VRRP on the RIF for the netdevice linked to the MACVLAN. Because the MACVLAN offload is tied to the existence of a RIF at the linked netdevice, adding a MACVLAN is currently not allowed until a RIF is present. If this requirement stays, it will never be possible to attach a first port into a topology that involves a MACVLAN. Thus topologies would need to be built in a certain order, which is impractical. Additionally, IP address removal, which leads to disappearance of the RIF that the MACVLAN depends on, cannot be vetoed. Thus even as things stand now it is possible to get to a state where a MACVLAN netdevice exists without a RIF, despite having mlxsw lowers. And once the MACVLAN is un-offloaded due to the RIF getting destroyed, recreating the RIF does not bring it back. In this patch, accept that MACVLAN can be created out of order and support that use case. One option would seem to be to simply recognize MACVLAN netdevices as "interesting", and let the existing replay mechanisms take care of the offload. However, that does not address the necessity to reoffload the MACVLAN once a RIF is created. Thus add a new replay hook, symmetrical to mlxsw_sp_rif_macvlan_flush(), called mlxsw_sp_rif_macvlan_replay(), which instead of unwinding the existing offloads, applies the configuration as if the netdevice were created just now. Additionally, remove all vetoes and warning messages that checked for the presence of a RIF at the linked device. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
As a RIF is created, refresh each nexthop group tracked at the CRIF for which the RIF was created. Note that nothing needs to be done for IPIP nexthops. The RIF for these is either available from the get-go, or will never be available, so no after-the-fact offloading needs to be done. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-