- 29 Nov, 2021 12 commits
-
-
Stephan Gerhold authored
The BAM Data Multiplexer provides access to the network data channels of modems integrated into many older Qualcomm SoCs, e.g. Qualcomm MSM8916 or MSM8974. It is built using a simple protocol layer on top of a DMA engine (Qualcomm BAM) and bidirectional interrupts to coordinate power control. The device tree node combines the incoming interrupt with the outgoing interrupts (smem-states) as well as the two DMA channels, which allows the BAM-DMUX driver to request all necessary resources. Signed-off-by: Stephan Gerhold <stephan@gerhold.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Guangbin Huang says: ==================== net: vxlan: add macro definition for number of IANA VXLAN-GPE port This series adds a macro definition for the IANA VXLAN-GPE port number as a cleanup. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hao Chen authored
This patch uses the macro IANA_VXLAN_GPE_UDP_PORT to replace the literal 4790 as a cleanup. Signed-off-by: Hao Chen <chenhao288@hisilicon.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hao Chen authored
Add a macro definition for the IANA VXLAN-GPE port number for generic use. Signed-off-by: Hao Chen <chenhao288@hisilicon.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
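For reference, a minimal sketch of the definition the two commits above describe; the value 4790 comes from the commit text, while the exact header it lands in is not stated here:

    /* IANA-assigned UDP port for VXLAN-GPE, replacing the literal 4790 */
    #define IANA_VXLAN_GPE_UDP_PORT 4790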
-
Sebastian Andrzej Siewior authored
The writer acquires dev_base_lock with bottom halves disabled. The reader can acquire dev_base_lock without disabling bottom halves because there is no writer in softirq context. On PREEMPT_RT the softirqs are preemptible and local_bh_disable() acts as a lock to ensure that resources protected by disabled bottom halves remain protected. This leads to a circular locking dependency if the lock is acquired with bottom halves disabled in one place (as in write_lock_bh()) and elsewhere with bottom halves enabled (as by read_lock() in netstat_show()) followed by disabling bottom halves (cxgb_get_stats() -> t4_wr_mbox_meat_timeout() -> spin_lock_bh()). This is the reverse locking order. All read_lock() invocations are from sysfs callbacks, which are not invoked from softirq context. Therefore there is no need to disable bottom halves while acquiring the write lock. Acquire the write lock of dev_base_lock without disabling bottom halves. Reported-by: Pei Zhang <pezhang@redhat.com> Reported-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
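A minimal sketch of the before/after locking pattern described here; the symbol names mirror the generic rwlock API rather than the actual net/core code:

    #include <linux/spinlock.h>

    static DEFINE_RWLOCK(example_base_lock);    /* stands in for dev_base_lock */

    static void writer_path(void)
    {
            /* before: write_lock_bh(&example_base_lock); */
            write_lock(&example_base_lock);      /* BH no longer disabled here */
            /* ... modify the device list ... */
            write_unlock(&example_base_lock);
    }

    static void sysfs_reader_path(void)
    {
            read_lock(&example_base_lock);       /* never runs in softirq context */
            /* ... read per-device statistics ... */
            read_unlock(&example_base_lock);
    }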
-
Tom Parkin authored
Previously commit e02d494d ("l2tp: Convert rwlock to RCU") converted most, but not all, rwlock instances in the l2tp subsystem to RCU. The remaining rwlock protects the per-tunnel hashlist of sessions which is used for session lookups in the UDP-encap data path. Convert the remaining rwlock to RCU to improve performance of UDP-encap tunnels. Note that the tunnel and session, which both live on RCU-protected lists, use slightly different approaches to incrementing their refcounts in the various getter functions. The tunnel has to use refcount_inc_not_zero because the tunnel shutdown process involves dropping the refcount to zero prior to synchronizing RCU readers (via kfree_rcu). By contrast, the session shutdown removes the session from the list(s) it is on, synchronizes with readers, and then decrements the session refcount. Since the getter functions increment the session refcount with the RCU read lock held, we prevent getters from seeing a zero session refcount and therefore don't need to use refcount_inc_not_zero. Signed-off-by: Tom Parkin <tparkin@katalix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
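A hedged sketch of the tunnel-style getter pattern described above, using hypothetical types and field names rather than the actual l2tp_core.c symbols:

    #include <linux/rculist.h>
    #include <linux/refcount.h>

    struct tunnel {
            u32 id;
            refcount_t ref;
            struct hlist_node node;          /* on an RCU-protected hash list */
    };

    static struct tunnel *tunnel_get(struct hlist_head *head, u32 id)
    {
            struct tunnel *t;

            rcu_read_lock();
            hlist_for_each_entry_rcu(t, head, node) {
                    /* the refcount may already have dropped to zero during
                     * shutdown, so only take a reference if it is non-zero */
                    if (t->id == id && refcount_inc_not_zero(&t->ref)) {
                            rcu_read_unlock();
                            return t;
                    }
            }
            rcu_read_unlock();
            return NULL;
    }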
-
David S. Miller authored
Maxime Chevallier says: ==================== net: mvneta: mqprio cleanups and shaping support This is the second version of the series that adds some improvements to the existing mqprio implementation in mvneta, and adds support for egress shaping offload. The first 3 patches are some minor cleanups, such as using the tc_mqprio_qopt_offload structure to get access to more offloading options, cleaning the logic to detect whether or not we should offload mqprio setting, and allowing to have a 1 to N mapping between TCs and queues. The last patch adds traffic shaping offload, using mvneta's per-queue token buckets, allowing to limit rates from 10Kbps up to 5Gbps with 10Kbps increments. This was tested only on an Armada 3720, with traffic up to 2.5Gbps. Changes since V1 fixes the build for 32bits kernels, using the right div helpers as suggested by Jakub. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Maxime Chevallier authored
The mvneta controller is able to do some token-bucket per-queue traffic shaping. This commit adds support for setting these token buckets using the TC mqprio interface. The token-bucket parameters are customisable, but the current implementation configures them with a 10kbps resolution for the rate limitation, since that covers the whole range of max_rate values from 10kbps to 5Gbps with 10kbps increments. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
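A hedged sketch of the rate conversion implied by the 10kbps resolution; it assumes the mqprio shaper passes max_rate in bytes per second, and the constant and helper names are illustrative:

    #include <linux/math64.h>

    #define EXAMPLE_RATE_UNIT_BPS  10000ULL          /* 10 kbps resolution */

    /* Convert a max_rate value (bytes/s) into 10 kbps units for the
     * per-queue token bucket; div_u64() keeps this buildable on 32-bit. */
    static u32 rate_to_10kbps_units(u64 max_rate_bytes_per_sec)
    {
            return div_u64(max_rate_bytes_per_sec * 8, EXAMPLE_RATE_UNIT_BPS);
    }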
-
Maxime Chevallier authored
The current mqprio implementation assumes that only one queue is used per TC. Use the offset and count parameters to allow using multiple queues per TC. In that case, the controller will use a standard round-robin algorithm to pick queues assigned to the same TC, with the same priority. This only applies to VLAN priorities in ingress traffic, each TC corresponding to a VLAN priority. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
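A minimal sketch of walking the offset/count arrays carried in struct tc_mqprio_qopt (fields from include/uapi/linux/pkt_sched.h); the mapping action shown is only a placeholder print:

    #include <linux/pkt_sched.h>
    #include <linux/printk.h>

    static void map_queues_to_tcs(const struct tc_mqprio_qopt *qopt)
    {
            int tc, q;

            for (tc = 0; tc < qopt->num_tc; tc++) {
                    /* each TC owns the queue range [offset, offset + count) */
                    for (q = qopt->offset[tc]; q < qopt->offset[tc] + qopt->count[tc]; q++)
                            pr_info("queue %d -> tc %d\n", q, tc);
            }
    }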
-
Maxime Chevallier authored
The qopt->hw flag is set by the TC code according to the offloading mode requested by the user. Don't force-set it in the driver; instead, read it to make sure we do what was asked. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Maxime Chevallier authored
The struct tc_mqprio_qopt_offload is a container for struct tc_mqprio_qopt that allows passing extra parameters, such as traffic shaping. This commit converts the current mqprio code to that new struct. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yang Yingliang authored
Use devm_ioremap() instead of ioremap() to avoid a missing iounmap(). Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
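A small sketch of the managed-mapping pattern this describes, in hypothetical probe() code (the driver and resource handling are illustrative, not the patched driver):

    #include <linux/io.h>
    #include <linux/ioport.h>
    #include <linux/platform_device.h>

    static int example_probe(struct platform_device *pdev)
    {
            struct resource *res;
            void __iomem *base;

            res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
            if (!res)
                    return -ENODEV;

            /* managed mapping: released automatically when the device is
             * unbound, so there is no iounmap() call to forget */
            base = devm_ioremap(&pdev->dev, res->start, resource_size(res));
            if (!base)
                    return -ENOMEM;

            return 0;
    }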
-
- 27 Nov, 2021 27 commits
-
-
Jakub Kicinski authored
Kuniyuki Iwashima says:
====================
af_unix: Replace unix_table_lock with per-hash locks.

The hash table of AF_UNIX sockets is protected by a single big lock, unix_table_lock. This series replaces it with small per-hash locks.

  1st - 2nd  : Misc refactoring
  3rd - 8th  : Separate BSD/abstract address logics
  9th - 11th : Prep to save a hash in each socket
  12th       : Replace the big lock
  13th       : Speed up autobind()

Note to maintainers: The 12th patch adds two kinds of Sparse warnings on patchwork:

- about unix_table_double_lock/unlock(): We can avoid this by adding two apparent acquires/releases annotations, but there are the same kinds of warnings about unix_state_double_lock().

- about unix_next_socket() and unix_seq_stop() (/proc/net/unix): This is because Sparse does not understand logic in unix_next_socket(), which leaves a spin lock held until it returns NULL. Also, tcp_seq_stop() causes a warning for the same reason.

These warnings seem reasonable, but let me know if there is any better way. Please see [0] for details.

[0]: https://lore.kernel.org/netdev/20211117001611.74123-1-kuniyu@amazon.co.jp/
====================
Link: https://lore.kernel.org/r/20211124021431.48956-1-kuniyu@amazon.co.jp
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
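For context, a hedged sketch of the two-bucket lock-ordering idea behind a helper like the unix_table_double_lock() mentioned above; the array and function names are illustrative, not the actual af_unix.c code:

    #include <linux/kernel.h>
    #include <linux/spinlock.h>

    static void example_table_double_lock(spinlock_t *locks,
                                          unsigned int hash1, unsigned int hash2)
    {
            if (hash1 == hash2) {
                    spin_lock(&locks[hash1]);
                    return;
            }
            if (hash1 > hash2)
                    swap(hash1, hash2);      /* always lock in ascending order */
            spin_lock(&locks[hash1]);
            spin_lock_nested(&locks[hash2], SINGLE_DEPTH_NESTING);
    }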
-
Kuniyuki Iwashima authored
When we bind an AF_UNIX socket without a name specified, the kernel selects an available one from 0x00000 to 0xFFFFF. unix_autobind() starts searching from a number stored in a static variable and increments it after acquiring two locks. If multiple processes try to autobind, they take the same lock and check whether a socket in the hash list has the same name. If not, one process uses it, and all except one end up retrying the _next_ number (not exactly the next one, as it may have been incremented by the other processes). The more sockets we autobind in parallel, the longer the latency gets. We can avoid such a race by starting the search from a random number. The histograms below show latency in unix_autobind() while 64 CPUs simultaneously autobind 1024 sockets each.

Without this patch:

  usec  : count  distribution
      0 :  1176  |***
      2 :  3655  |***********
      4 :  4094  |*************
      6 :  3831  |************
      8 :  3829  |************
     10 :  3844  |************
     12 :  3638  |***********
     14 :  2992  |*********
     16 :  2485  |*******
     18 :  2230  |*******
     20 :  2095  |******
     22 :  1853  |*****
     24 :  1827  |*****
     26 :  1677  |*****
     28 :  1473  |****
     30 :  1573  |*****
     32 :  1417  |****
     34 :  1385  |****
     36 :  1345  |****
     38 :  1344  |****
     40 :  1200  |***

With this patch:

  usec  : count  distribution
      0 :  1855  |******
      2 :  6464  |*********************
      4 :  9936  |********************************
      6 : 12107  |****************************************
      8 : 10441  |**********************************
     10 :  7264  |***********************
     12 :  4254  |**************
     14 :  2538  |********
     16 :  1596  |*****
     18 :  1088  |***
     20 :   800  |**
     22 :   670  |**
     24 :   601  |*
     26 :   562  |*
     28 :   525  |*
     30 :   446  |*
     32 :   378  |*
     34 :   337  |*
     36 :   317  |*
     38 :   314  |*
     40 :   298  |

Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
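A tiny sketch of the change in starting point described above; the range comes from the commit text, while the function name and choice of random helper are assumptions:

    #include <linux/random.h>

    /* pick a random starting ordinal in [0x00000, 0xFFFFF] instead of
     * continuing from a shared static counter */
    static u32 example_autobind_start(void)
    {
            return get_random_u32() % 0x100000;
    }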
-
Kuniyuki Iwashima authored
The hash table of AF_UNIX sockets is protected by a single lock. This patch replaces it with per-hash locks. The effect is noticeable when we handle multiple sockets simultaneously. Here is a test result on an EC2 c5.24xlarge instance. It shows latency (under 10us only) in unix_insert_unbound_socket() while 64 CPUs create 1024 sockets each in parallel.

Without this patch:

  nsec  : count  distribution
      0 :   179  |
    500 :  3021  |*********
   1000 :  6271  |*******************
   1500 :  6318  |*******************
   2000 :  5828  |*****************
   2500 :  5124  |***************
   3000 :  4426  |*************
   3500 :  3672  |***********
   4000 :  3138  |*********
   4500 :  2811  |********
   5000 :  2384  |*******
   5500 :  2023  |******
   6000 :  1954  |*****
   6500 :  1737  |*****
   7000 :  1749  |*****
   7500 :  1520  |****
   8000 :  1469  |****
   8500 :  1394  |****
   9000 :  1232  |***
   9500 :  1138  |***
  10000 :   994  |***

With this patch:

  nsec  : count  distribution
      0 :  1634  |****
    500 : 13170  |****************************************
   1000 : 13156  |***************************************
   1500 :  9010  |***************************
   2000 :  6363  |*******************
   2500 :  4443  |*************
   3000 :  3240  |*********
   3500 :  2549  |*******
   4000 :  1872  |*****
   4500 :  1504  |****
   5000 :  1247  |***
   5500 :  1035  |***
   6000 :   889  |**
   6500 :   744  |**
   7000 :   634  |*
   7500 :   498  |*
   8000 :   433  |*
   8500 :   355  |*
   9000 :   336  |*
   9500 :   284  |
  10000 :   243  |

Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
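A hedged sketch of the per-hash locking scheme described here; the table size, names, and explicit init step are illustrative rather than the actual af_unix.c code:

    #include <linux/spinlock.h>
    #include <net/sock.h>

    #define EXAMPLE_HASH_SIZE 256

    static spinlock_t example_table_locks[EXAMPLE_HASH_SIZE];
    static struct hlist_head example_socket_table[EXAMPLE_HASH_SIZE];

    static void example_locks_init(void)
    {
            int i;

            for (i = 0; i < EXAMPLE_HASH_SIZE; i++)
                    spin_lock_init(&example_table_locks[i]);
    }

    static void example_insert_socket(struct sock *sk, unsigned int hash)
    {
            /* only the bucket being modified is serialized, so inserts into
             * different buckets no longer contend on one global lock */
            spin_lock(&example_table_locks[hash]);
            sk_add_node(sk, &example_socket_table[hash]);
            spin_unlock(&example_table_locks[hash]);
    }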
-
Kuniyuki Iwashima authored
To replace unix_table_lock with per-hash locks in the next patch, we need to save a hash in each socket because /proc/net/unix and BPF progs iterate sockets while holding a hash table lock and release it later in a different function. Currently, we store a real/pseudo hash in struct unix_address. However, we do not allocate it for unbound sockets, nor should we do so just for that. For this purpose, we can use sk_hash. Then we no longer use the hash field in struct unix_address and can remove it.

Also, this patch does:
- rename unix_insert_socket() to unix_insert_unbound_socket()
- remove the redundant list argument from __unix_insert_socket() and unix_insert_unbound_socket()
- use 'unsigned int' instead of 'unsigned' in __unix_set_addr_hash()
- remove 'inline' from unix_remove_socket() and unix_insert_unbound_socket().

Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Kuniyuki Iwashima authored
This patch adds three helper functions that calculate hashes for unbound sockets and bound sockets with BSD/abstract addresses. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Kuniyuki Iwashima authored
In the BSD and abstract address cases, we store sockets in the hash table with keys between 0 and UNIX_HASH_SIZE - 1. However, the hash saved in a socket varies depending on its address type; sockets with BSD addresses always have UNIX_HASH_SIZE in their unix_sk(sk)->addr->hash. This is used only by the UNIX_ABSTRACT() macro to check the address type. The difference between the saved hashes ultimately comes from the first byte of the address, so we can test that byte directly. Then we can keep a real hash in each socket and replace unix_table_lock with per-hash locks in a later patch. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
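A minimal sketch of the first-byte test the commit refers to: an abstract AF_UNIX address starts with a NUL byte, while a pathname (BSD) address does not (the function name is illustrative):

    #include <linux/types.h>
    #include <linux/un.h>

    static bool example_addr_is_abstract(const struct sockaddr_un *sunaddr)
    {
            /* abstract-namespace addresses begin with '\0' in sun_path */
            return sunaddr->sun_path[0] == '\0';
    }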
-
Kuniyuki Iwashima authored
To terminate the address with '\0' in unix_bind_bsd(), we add unix_create_addr() and call it in unix_bind_bsd() and unix_bind_abstract(). Also, unix_bind_abstract() does not return -EEXIST. Only kern_path_create() and vfs_mknod() in unix_bind_bsd() can return it, so we move the last error check in unix_bind() to unix_bind_bsd(). Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Kuniyuki Iwashima authored
This patch removes unix_mkname() and postpones calculating a hash until unix_bind_abstract(). Some BSD-specific code still remains in unix_bind(); the next patch packs it into unix_bind_bsd(). Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Kuniyuki Iwashima authored
We should not call unix_mkname() before unix_find_other() and instead do the same thing where necessary based on the address type:
- terminating the address with '\0' in unix_find_bsd()
- calculating the hash in unix_find_abstract().
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Kuniyuki Iwashima authored
unix_mkname() tests the socket address length and family and does some processing based on the address type. It is called at an early stage, so some of its work is redundant or ends up wasted. The address length/family tests are done twice in unix_bind(). Also, the address type is rechecked later in unix_bind() and unix_find_other(), where we can do the same processing. Moreover, in the BSD address case, the hash is set to 0 but never used, which is confusing. This patch moves the address tests out of unix_mkname(); the following patches move the other parts into appropriate places and finally remove unix_mkname(). Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Kuniyuki Iwashima authored
We can return an error as a pointer and need not pass an additional argument to unix_find_other(). Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Kuniyuki Iwashima authored
As done in the commit fa42d910 ("unix_bind(): take BSD and abstract address cases into new helpers"), this patch moves BSD and abstract address cases from unix_find_other() into unix_find_bsd() and unix_find_abstract(). Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Kuniyuki Iwashima authored
We do not use struct socket in unix_autobind(), and we already pass struct sock to unix_bind_bsd() and unix_bind_abstract(). Let's pass struct sock to unix_autobind() as well. Also, this patch fixes these errors reported by checkpatch.pl:

ERROR: do not use assignment in if condition
#1795: FILE: net/unix/af_unix.c:1795:
+	if (test_bit(SOCK_PASSCRED, &sock->flags) && !u->addr

CHECK: Logical continuations should be on the previous line
#1796: FILE: net/unix/af_unix.c:1796:
+	if (test_bit(SOCK_PASSCRED, &sock->flags) && !u->addr
+	    && (err = unix_autobind(sock)) != 0)

Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Kuniyuki Iwashima authored
The length of an AF_UNIX socket address contains an offset to the sun_path member of struct sockaddr_un. Currently, the only preceding member is sun_family, whose type sa_family_t resolves to short, so the offset is represented by sizeof(short). However, this is not clear and is fragile against changes in struct sockaddr_storage or sockaddr_un. This commit makes it clear and robust by rewriting sizeof() with offsetof(). Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
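The rewrite described above, shown as a standalone userspace demonstration (the program itself is illustrative, not the patched kernel code):

    #include <stddef.h>
    #include <stdio.h>
    #include <sys/un.h>

    int main(void)
    {
            /* Both expressions give the same offset today, but offsetof()
             * stays correct if members were ever added before sun_path. */
            printf("sizeof(short) = %zu, offsetof(sun_path) = %zu\n",
                   sizeof(short), offsetof(struct sockaddr_un, sun_path));
            return 0;
    }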
-
Xin Long authored
The same optimization as the one in commit cc0be1ad ("net: bridge: Slightly optimize 'find_portno()'") is needed for the 'changed' bitmap in __br_vlan_set_default_pvid(). Signed-off-by: Xin Long <lucien.xin@gmail.com> Link: https://lore.kernel.org/r/4e35f415226765e79c2a11d2c96fbf3061c486e2.1637782773.git.lucien.xin@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
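A hedged sketch of the allocation pattern the referenced find_portno() change is understood to use, applied to a 'changed' bitmap; whether the bridge code uses exactly these helpers here is an assumption:

    #include <linux/bitmap.h>
    #include <linux/slab.h>

    static int example_set_default_pvid(unsigned int nbits)
    {
            unsigned long *changed;

            /* bitmap_zalloc() instead of an open-coded kcalloc() of longs */
            changed = bitmap_zalloc(nbits, GFP_KERNEL);
            if (!changed)
                    return -ENOMEM;

            /* ... set a bit for each port whose pvid actually changed ... */

            bitmap_free(changed);
            return 0;
    }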
-
Tonghao Zhang authored
For netdevs that do not support ethtool ops (e.g. .get_drvinfo), such as ifb and bareudp, we can use the rtnl kind as a default name. An ifb netdev may be created with another prefix, not ifbX. Cc: Arnd Bergmann <arnd@arndb.de> Cc: Hao Chen <chenhao288@hisilicon.com> Cc: Heiner Kallweit <hkallweit1@gmail.com> Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org> Cc: Danielle Ratson <danieller@nvidia.com> Cc: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Link: https://lore.kernel.org/r/20211125163049.84970-1-xiangxia.m.yue@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Nikolay Aleksandrov says:
====================
selftests: net: bridge: vlan multicast tests

This patch-set adds selftests for the new vlan multicast options that were recently added. Most of the tests check for default values, changing options and try to verify that the changes actually take effect. The last test checks if the dependency between vlan_filtering and mcast_vlan_snooping holds. The rest are pretty self-explanatory.

TEST: Vlan multicast snooping enable [ OK ]
TEST: Vlan global options existence [ OK ]
TEST: Vlan mcast_snooping global option default value [ OK ]
TEST: Vlan 10 multicast snooping control [ OK ]
TEST: Vlan mcast_querier global option default value [ OK ]
TEST: Vlan 10 multicast querier enable [ OK ]
TEST: Vlan 10 tagged IGMPv2 general query sent [ OK ]
TEST: Vlan 10 tagged MLD general query sent [ OK ]
TEST: Vlan mcast_igmp_version global option default value [ OK ]
TEST: Vlan mcast_mld_version global option default value [ OK ]
TEST: Vlan 10 mcast_igmp_version option changed to 3 [ OK ]
TEST: Vlan 10 tagged IGMPv3 general query sent [ OK ]
TEST: Vlan 10 mcast_mld_version option changed to 2 [ OK ]
TEST: Vlan 10 tagged MLDv2 general query sent [ OK ]
TEST: Vlan mcast_last_member_count global option default value [ OK ]
TEST: Vlan mcast_last_member_interval global option default value [ OK ]
TEST: Vlan 10 mcast_last_member_count option changed to 3 [ OK ]
TEST: Vlan 10 mcast_last_member_interval option changed to 200 [ OK ]
TEST: Vlan mcast_startup_query_interval global option default value [ OK ]
TEST: Vlan mcast_startup_query_count global option default value [ OK ]
TEST: Vlan 10 mcast_startup_query_interval option changed to 100 [ OK ]
TEST: Vlan 10 mcast_startup_query_count option changed to 3 [ OK ]
TEST: Vlan mcast_membership_interval global option default value [ OK ]
TEST: Vlan 10 mcast_membership_interval option changed to 200 [ OK ]
TEST: Vlan 10 mcast_membership_interval mdb entry expire [ OK ]
TEST: Vlan mcast_querier_interval global option default value [ OK ]
TEST: Vlan 10 mcast_querier_interval option changed to 100 [ OK ]
TEST: Vlan 10 mcast_querier_interval expire after outside query [ OK ]
TEST: Vlan mcast_query_interval global option default value [ OK ]
TEST: Vlan 10 mcast_query_interval option changed to 200 [ OK ]
TEST: Vlan mcast_query_response_interval global option default value [ OK ]
TEST: Vlan 10 mcast_query_response_interval option changed to 200 [ OK ]
TEST: Port vlan 10 option mcast_router default value [ OK ]
TEST: Port vlan 10 mcast_router option changed to 2 [ OK ]
TEST: Flood unknown vlan multicast packets to router port only [ OK ]
TEST: Disable multicast vlan snooping when vlan filtering is disabled [ OK ]
====================
Link: https://lore.kernel.org/r/20211125140858.3639139-1-razor@blackwall.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Nikolay Aleksandrov authored
Add a test for dependency of mcast_vlan_snooping on vlan_filtering. If vlan_filtering gets disabled, then mcast_vlan_snooping must be automatically disabled as well.

TEST: Disable multicast vlan snooping when vlan filtering is disabled [ OK ]

Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Nikolay Aleksandrov authored
Add tests for the new per-port/vlan mcast_router option, verify that unknown multicast packets are flooded only to router ports.

TEST: Port vlan 10 option mcast_router default value [ OK ]
TEST: Port vlan 10 mcast_router option changed to 2 [ OK ]
TEST: Flood unknown vlan multicast packets to router port only [ OK ]

Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Nikolay Aleksandrov authored
Add tests which change the new per-vlan mcast_query_interval and verify the new value is in effect; also add a test that changes mcast_query_response_interval's value.

TEST: Vlan mcast_query_interval global option default value [ OK ]
TEST: Vlan 10 mcast_query_interval option changed to 200 [ OK ]
TEST: Vlan mcast_query_response_interval global option default value [ OK ]
TEST: Vlan 10 mcast_query_response_interval option changed to 200 [ OK ]

Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Nikolay Aleksandrov authored
Add tests which change the new per-vlan mcast_querier_interval and verify that it is taken into account when an outside querier is present.

TEST: Vlan mcast_querier_interval global option default value [ OK ]
TEST: Vlan 10 mcast_querier_interval option changed to 100 [ OK ]
TEST: Vlan 10 mcast_querier_interval expire after outside query [ OK ]

Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Nikolay Aleksandrov authored
Add a test which changes the new per-vlan mcast_membership_interval and verifies that a newly learned mdb entry would expire in that interval.

TEST: Vlan mcast_membership_interval global option default value [ OK ]
TEST: Vlan 10 mcast_membership_interval option changed to 200 [ OK ]
TEST: Vlan 10 mcast_membership_interval mdb entry expire [ OK ]

Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Nikolay Aleksandrov authored
Add tests which change the new per-vlan startup query count/interval options and verify the proper number of queries are sent in the expected interval.

TEST: Vlan mcast_startup_query_interval global option default value [ OK ]
TEST: Vlan mcast_startup_query_count global option default value [ OK ]
TEST: Vlan 10 mcast_startup_query_interval option changed to 100 [ OK ]
TEST: Vlan 10 mcast_startup_query_count option changed to 3 [ OK ]

Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Nikolay Aleksandrov authored
Add tests which verify the default values of mcast_last_member_count and mcast_last_member_interval and also try to change them.

TEST: Vlan mcast_last_member_count global option default value [ OK ]
TEST: Vlan mcast_last_member_interval global option default value [ OK ]
TEST: Vlan 10 mcast_last_member_count option changed to 3 [ OK ]
TEST: Vlan 10 mcast_last_member_interval option changed to 200 [ OK ]

Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Nikolay Aleksandrov authored
Add tests which change the new per-vlan IGMP/MLD versions and verify that proper tagged general query packets are sent.

TEST: Vlan mcast_igmp_version global option default value [ OK ]
TEST: Vlan mcast_mld_version global option default value [ OK ]
TEST: Vlan 10 mcast_igmp_version option changed to 3 [ OK ]
TEST: Vlan 10 tagged IGMPv3 general query sent [ OK ]
TEST: Vlan 10 mcast_mld_version option changed to 2 [ OK ]
TEST: Vlan 10 tagged MLDv2 general query sent [ OK ]

Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Nikolay Aleksandrov authored
Add a test to try the new global vlan mcast_querier control and also verify that tagged general query packets are properly generated when querier is enabled for a single vlan.

TEST: Vlan mcast_querier global option default value [ OK ]
TEST: Vlan 10 multicast querier enable [ OK ]
TEST: Vlan 10 tagged IGMPv2 general query sent [ OK ]
TEST: Vlan 10 tagged MLD general query sent [ OK ]

Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Nikolay Aleksandrov authored
Add the first test for bridge per-vlan multicast snooping, which checks if control of the global and per-vlan options works as expected; joins and leaves are tested at each option value.

TEST: Vlan multicast snooping enable [ OK ]
TEST: Vlan global options existence [ OK ]
TEST: Vlan mcast_snooping global option default value [ OK ]
TEST: Vlan 10 multicast snooping control [ OK ]

Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 26 Nov, 2021 1 commit
-
-
Jakub Kicinski authored
Merge of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net. Conflict in drivers/net/ipa/ipa_main.c between 8afc7e47 ("net: ipa: separate disabling setup from modem stop") and 76b5fbcd ("net: ipa: kill ipa_modem_init()"): duplicated include, drop one. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-