- 06 Feb, 2023 26 commits
-
-
Petr Machata authored
The testsuite that checks for mcast_max_groups functionality will need to wipe the added groups as well. Add helpers to build IGMP and MLD packets announcing that the host is leaving a given group. Signed-off-by: Petr Machata <petrm@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
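For the IGMP case, a leave announcement can be as small as an IGMPv2 Leave Group message: a type byte (0x17), a zero max-response field, an RFC 1071 checksum, and the group address (RFC 2236). The selftest helpers emit this as a mausezahn payload string from shell; the C rendering below is purely illustrative, not the helpers' actual code:

  #include <stdint.h>
  #include <stddef.h>
  #include <string.h>
  #include <arpa/inet.h>

  struct igmpv2_leave {
      uint8_t  type;       /* 0x17 = Leave Group */
      uint8_t  max_resp;   /* 0 for a leave */
      uint16_t csum;       /* RFC 1071 checksum of the message */
      uint32_t group;      /* group being left, network byte order */
  };

  /* RFC 1071 checksum; see the sketch under the checksum commit below */
  static uint16_t rfc1071_csum(const uint8_t *buf, size_t len);

  static void build_igmpv2_leave(struct igmpv2_leave *pkt, const char *group)
  {
      memset(pkt, 0, sizeof(*pkt));
      pkt->type = 0x17;
      inet_pton(AF_INET, group, &pkt->group);
      pkt->csum = htons(rfc1071_csum((const uint8_t *)pkt, sizeof(*pkt)));
  }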
-
Petr Machata authored
The testsuite that checks for mcast_max_groups functionality will need to generate IGMP and MLD packets with a configurable number of (S,G) addresses. To that end, further extend igmpv3_is_in_get() and mldv2_is_in_get() to accept a list of IP addresses instead of a single address. Signed-off-by: Petr Machata <petrm@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
In order to generate IGMPv3 and MLDv2 packets on the fly, the functions that generate these packets need to be able to generate packets for different groups and different sources. Generating MLDv2 packets further needs the source address of the packet for purposes of checksum calculation. Add the necessary parameters, and generate the payload accordingly by dispatching to helpers added in the previous patches. Adjust the sole client, bridge_mdb.sh, as well. Signed-off-by: Petr Machata <petrm@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
In order to generate IGMPv3 and MLDv2 packets on the fly, we will need helpers to calculate the packet checksum. The approach presented in this patch revolves around payload templates for mausezahn. These are mausezahn-like payload strings (01:23:45:...) with possibly one 2-byte sequence replaced with the word PAYLOAD. The main function is payload_template_calc_checksum(), which calculates the RFC 1071 checksum of the message. There are further helpers to convert the checksum to the payload format and to expand it. For IPv6, the MLDv2 message checksum is computed using a pseudoheader that differs from the header used in the payload itself. Because the two messages differ, the checksum needs to be returned as a separate quantity instead of being expanded in place in the payload itself. Furthermore, the pseudoheader includes the length of the message. Much like the checksum, this needs to be expanded in mausezahn format, and likewise for the number of addresses in (S,G) entries. Thus there are several places where a computed quantity needs to be presented in the payload format. Add a helper u16_to_bytes(), which will be used in all these cases. Signed-off-by: Petr Machata <petrm@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
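In C terms, the RFC 1071 arithmetic and the u16_to_bytes() conversion look roughly like this (the real helpers are shell functions operating on "aa:bb:..." strings; this is an illustrative, self-contained sketch):

  #include <stdint.h>
  #include <stddef.h>
  #include <stdio.h>

  /* RFC 1071: sum the message as 16-bit big-endian words, fold carries
   * back into the low 16 bits, and return the one's complement.
   */
  static uint16_t rfc1071_csum(const uint8_t *buf, size_t len)
  {
      uint32_t sum = 0;
      size_t i;

      for (i = 0; i + 1 < len; i += 2)
          sum += (buf[i] << 8) | buf[i + 1];
      if (len & 1)
          sum += buf[len - 1] << 8;   /* pad the odd byte with zero */
      while (sum >> 16)
          sum = (sum & 0xffff) + (sum >> 16);
      return ~sum & 0xffff;
  }

  /* u16_to_bytes() counterpart: render a 16-bit quantity (checksum,
   * message length, number of sources) in mausezahn payload notation,
   * e.g. 0x1234 -> "12:34", ready to replace the PAYLOAD placeholder.
   */
  static void u16_to_bytes(uint16_t v, char out[6])
  {
      snprintf(out, 6, "%02x:%02x", v >> 8, v & 0xff);
  }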
-
Petr Machata authored
In order to generate IGMPv3 and MLDv2 packets on the fly, we will need helpers to expand IPv4 and IPv6 addresses given as parameters in mausezahn payload notation. Add helpers that do it. Signed-off-by: Petr Machata <petrm@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
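A hypothetical C equivalent of the address-expansion helpers (the real ones are shell functions; the name here is made up): expand an address into mausezahn byte notation, e.g. "192.0.2.1" -> "c0:00:02:01".

  #include <stdio.h>
  #include <stddef.h>
  #include <arpa/inet.h>

  /* Works for both AF_INET (4 bytes) and AF_INET6 (16 bytes);
   * out must hold at least 3 * address-length bytes.
   */
  static int addr_to_payload(int af, const char *addr, char *out, size_t outlen)
  {
      unsigned char buf[16];
      size_t i, len = af == AF_INET ? 4 : 16;

      if (outlen < len * 3 || inet_pton(af, addr, buf) != 1)
          return -1;
      for (i = 0; i < len; i++)
          snprintf(out + i * 3, outlen - i * 3, "%02x%s",
                   buf[i], i + 1 < len ? ":" : "");
      return 0;
  }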
-
Petr Machata authored
Add the letter missing from the word "INCLUDE". Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
These functions will be helpful for other testsuites as well. Extract them to a common place. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
The previous patch added accounting for the number of MDB entries per port and per port-VLAN, and the logic to verify that these values stay within configured bounds. However, it didn't provide means to actually configure those bounds or read the occupancy. This patch does that.

Two new netlink attributes are added for the MDB occupancy: IFLA_BRPORT_MCAST_N_GROUPS for the per-port occupancy and BRIDGE_VLANDB_ENTRY_MCAST_N_GROUPS for the per-port-VLAN occupancy. Another two are added for the maximum number of MDB entries: IFLA_BRPORT_MCAST_MAX_GROUPS for the per-port maximum, and BRIDGE_VLANDB_ENTRY_MCAST_MAX_GROUPS for the per-port-VLAN one.

Note that the two new IFLA_BRPORT_ attributes prompt bumping of RTNL_SLAVE_MAX_TYPE to size the slave attribute tables large enough.

The new attributes are used like this:

  # ip link add name br up type bridge vlan_filtering 1 mcast_snooping 1 \
        mcast_vlan_snooping 1 mcast_querier 1
  # ip link set dev v1 master br
  # bridge vlan add dev v1 vid 2
  # bridge vlan set dev v1 vid 1 mcast_max_groups 1
  # bridge mdb add dev br port v1 grp 230.1.2.3 temp vid 1
  # bridge mdb add dev br port v1 grp 230.1.2.4 temp vid 1
  Error: bridge: Port-VLAN is already in 1 groups, and mcast_max_groups=1.

  # bridge link set dev v1 mcast_max_groups 1
  # bridge mdb add dev br port v1 grp 230.1.2.3 temp vid 2
  Error: bridge: Port is already in 1 groups, and mcast_max_groups=1.

  # bridge -d link show
  5: v1@v2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br [...]
      [...] mcast_n_groups 1 mcast_max_groups 1

  # bridge -d vlan show
  port              vlan-id
  br                1 PVID Egress Untagged
                      state forwarding mcast_router 1
  v1                1 PVID Egress Untagged [...]
                      mcast_n_groups 1 mcast_max_groups 1
                    2 [...] mcast_n_groups 0 mcast_max_groups 0

Signed-off-by: Petr Machata <petrm@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
The MDB maintained by the bridge is limited. When the bridge is configured for IGMP / MLD snooping, a buggy or malicious client can easily exhaust its capacity. In the SW datapath, the capacity is configurable through the IFLA_BR_MCAST_HASH_MAX parameter, but it is ultimately finite. Obviously a similar limit exists in the HW datapath for purposes of offloading.

In order to prevent unilateral exhaustion of MDB resources, introduce two parameters in each of two contexts:

- Per-port and per-port-VLAN number of MDB entries that the port is a member of.

- Per-port and (when BROPT_MCAST_VLAN_SNOOPING_ENABLED is enabled) per-port-VLAN maximum permitted number of MDB entries, or 0 for no limit.

The per-port multicast context is used for tracking of MDB entries for the port as a whole. This is available for all bridges. The per-port-VLAN multicast context is only available on VLAN-filtering bridges on VLANs that have multicast snooping on.

With these changes in place, it will be possible to configure the MDB limit for the bridge as a whole, for any one port as a whole, or for any single port-VLAN. Note that unlike the global limit, exhaustion of the per-port and per-port-VLAN maximums does not cause disablement of multicast snooping. It is also permitted to configure the local limit larger than hash_max, even though that is not useful.

This patch introduces only the accounting for the number of entries and the max field itself, but not the means to toggle the max. The next patch introduces the netlink APIs to toggle and read the values.

Signed-off-by: Petr Machata <petrm@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
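The shape of the check is simple; as a sketch of what the accounting amounts to (field and function names are illustrative, not necessarily the patch's):

  #include <errno.h>

  /* Each multicast context tracks how many MDB entries it participates
   * in; a configured maximum of 0 means "no limit".
   */
  struct mcast_ctx {
      unsigned int mdb_n_entries;
      unsigned int mdb_max_entries;   /* 0 == no limit */
  };

  static int mcast_ctx_account_entry(struct mcast_ctx *ctx)
  {
      if (ctx->mdb_max_entries &&
          ctx->mdb_n_entries >= ctx->mdb_max_entries)
          return -ENOSPC;   /* over the per-port / per-port-VLAN limit */
      ctx->mdb_n_entries++;
      return 0;
  }

  static void mcast_ctx_release_entry(struct mcast_ctx *ctx)
  {
      ctx->mdb_n_entries--;
  }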
-
Petr Machata authored
The following patch will add two more maximum MDB allowances to the global one, mcast_hash_max, that exists today. In all these cases, attempts to add MDB entries above the configured maximums through netlink fail noisily and obviously. Such visibility is missing when entries are added through control plane traffic, i.e. IGMP or MLD packets. To improve visibility in those cases, add a trace point that reports the violation, including the relevant netdevice (be it a slave or the bridge itself), and the MDB entry parameters:

  # perf record -e bridge:br_mdb_full &
  # [...]
  # perf script | cut -d: -f4-
   dev v2 af 2 src ::ffff:0.0.0.0 grp ::ffff:239.1.1.112/00:00:00:00:00:00 vid 0
   dev v2 af 10 src :: grp ff0e::112/00:00:00:00:00:00 vid 0
   dev v2 af 2 src ::ffff:0.0.0.0 grp ::ffff:239.1.1.112/00:00:00:00:00:00 vid 10
   dev v2 af 10 src 2001:db8:1::1 grp ff0e::1/00:00:00:00:00:00 vid 10
   dev v2 af 2 src ::ffff:192.0.2.1 grp ::ffff:239.1.1.1/00:00:00:00:00:00 vid 10

CC: Steven Rostedt <rostedt@goodmis.org> CC: linux-trace-kernel@vger.kernel.org Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
This function will have more to clean up in the following patches. Structuring the cleanups in one labeled block will allow reusing the same cleanup from several places. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
Since cleaning up the effects of br_multicast_new_port_group() just consists of delisting and freeing the memory, br_mdb_add_group_star_g() inlines the corresponding code. In the following patches, the number of per-port and per-port-VLAN MDB entries is going to be maintained, and that counter will have to be updated. Because that logic is going to be hidden in the br_multicast module, introduce a new hook intended to remove a newly-created group again. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
Now that br_multicast_new_port_group() takes an extack argument, move setting the extack there. The downside is that the error messages end up being less specific (the function cannot distinguish between (S,G) and (*,G) groups). However, the alternative is to check in the caller whether the callee set the extack, and if it didn't, set it. But that is only done when the callee is not exactly known. (E.g. in case of a notifier invocation.) Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
Make it possible to set an extack in br_multicast_new_port_group(). Eventually, this function will check for per-port and per-port-vlan MDB maximums, and will use the extack to communicate the reason for the bounce. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
Make any attributes newly-added to br_port_policy or vlan_tunnel_policy parsed strictly, to prevent userspace from passing garbage. Note that this patchset only touches the former policy. The latter was adjusted for completeness' sake. There do not appear to be other _deprecated calls with non-NULL policies. Suggested-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Daniel Machon says:

====================
net: Add support for PSFP in Sparx5

Add support for Per-Stream Filtering and Policing (802.1Q-2018, 8.6.5.1).

The VCAP CLM (VCAP IS0 ingress classifier) classifies streams, identified by ISDX (Ingress Service Index, frame metadata), and maps ISDX to streams.

Flow meters are also classified by ISDX, and implemented using service policers (Service Dual Leaky Buckets, SDLB). Leaky buckets are linked together in a leak chain of a leak group. Leak groups are preconfigured to serve buckets within a certain rate interval.

Stream gates are time-based policers used by PSFP. Frames are dropped based on the gate state (OPEN/CLOSE), which is altered based on the Gate Control List (GCL) and the current PTP time. Apart from time-based policing, stream gates can alter egress queue selection for the frames that pass through the gate. This is done through the Internal Priority Selector (IPS). Stream gates are mapped from stream filters.

Support for the tc actions gate and police has been added to the VCAP IS0 set of supported actions.

Examples:

  // tc filter with gate action
  $ tc filter add dev eth1 ingress chain 1100000 prio 1 handle 1001 protocol \
    802.1q flower skip_sw vlan_id 100 action gate base-time 0 sched-entry open \
    700000 7 8m sched-entry close 300000 action goto chain 1200000

  // tc filter with police action
  $ tc filter add dev eth1 ingress chain 1100000 prio 1 handle 1002 protocol \
    802.1q flower skip_sw vlan_id 100 action police rate 1gbit burst 8096 \
    conform-exceed drop action goto chain 1200000

Patches:

Patch #1:  Adds new register needed for PSFP.
Patch #2:  Adds resource pools to control PSFP needed chip resources.
Patch #3:  Adds support for SDLB's needed for flow-meters.
Patch #4:  Adds support for service policers.
Patch #5:  Adds support for PSFP flow-meters, using service policers.
Patch #6:  Adds a new function to calculate basetime, required by flow-meters.
Patch #7:  Adds support for PSFP stream gates.
Patch #8:  Adds support for PSFP stream filters.
Patch #9:  Adds a function to initialize flow-meters, stream gates and stream filters.
Patch #10: Adds the required flower code to configure PSFP using the tc command.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniel Machon authored
Add support for tc actions gate and police, in order to implement support for configuring PSFP through tc. Signed-off-by: Daniel Machon <daniel.machon@microchip.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniel Machon authored
Initialize the SDLB's, stream gates and stream filters. Signed-off-by: Daniel Machon <daniel.machon@microchip.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniel Machon authored
Add support for configuring PSFP stream filters (IEEE 802.1Q-2018, 8.6.5.1.1). The VCAP CLM (VCAP IS0 ingress classifier) classifies streams, identified by ISDX (Ingress Service Index, frame metadata), and maps ISDX to streams. Signed-off-by: Daniel Machon <daniel.machon@microchip.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniel Machon authored
Add support for configuring PSFP stream gates (IEEE 802.1Q-2018, 8.6.5.1.2). Stream gates are time-based policers used by PSFP. Frames are dropped based on the gate state (OPEN/CLOSE), which is altered based on the Gate Control List (GCL) and the current PTP time. Apart from time-based policing, stream gates can alter egress queue selection for the frames that pass through the gate. This is done through the Internal Priority Selector (IPS). Stream gates are mapped from stream filters. Signed-off-by: Daniel Machon <daniel.machon@microchip.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
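As a rough illustration of what the gate evaluates (not the driver's code; all names here are made up), the gate state at a given PTP time follows from the position within the current cycle:

  #include <stdint.h>
  #include <stdbool.h>

  struct gcl_entry {
      bool     open;          /* gate state held for this interval */
      uint64_t interval_ns;   /* how long the state is held */
  };

  /* Walk the GCL to find the gate state at PTP time "now", given the
   * base time of the schedule and the total cycle time (the sum of
   * all entry intervals). Assumes base_time <= now; see the basetime
   * sketch under the next commit for how that is arranged.
   */
  static bool gate_is_open(const struct gcl_entry *gcl, int n_entries,
                           uint64_t base_time, uint64_t cycle_time,
                           uint64_t now)
  {
      uint64_t offset = (now - base_time) % cycle_time;
      int i;

      for (i = 0; i < n_entries; i++) {
          if (offset < gcl[i].interval_ns)
              return gcl[i].open;
          offset -= gcl[i].interval_ns;
      }
      return false;   /* not reached if intervals sum to cycle_time */
  }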
-
Daniel Machon authored
Add a new function for calculating the PTP basetime, required by the stream gate scheduler to calculate the gate state (open/close). Signed-off-by: Daniel Machon <daniel.machon@microchip.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
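The usual trick for such a helper (an illustrative sketch, not the driver's code) is to roll a base time that lies in the past forward by a whole number of cycles, so the schedule starts at the next cycle boundary:

  #include <stdint.h>

  /* Advance base_time by whole cycles until it no longer lies in the
   * past. Assumes cycle_time > 0.
   */
  static uint64_t psfp_calc_basetime(uint64_t base_time, uint64_t now,
                                     uint64_t cycle_time)
  {
      uint64_t n;

      if (base_time >= now)
          return base_time;

      n = (now - base_time + cycle_time - 1) / cycle_time;   /* ceil */
      return base_time + n * cycle_time;
  }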
-
Daniel Machon authored
Add support for configuring PSFP flow-meters (IEEE 802.1Q-2018, 8.6.5.1.3). The VCAP CLM (VCAP IS0 ingress classifier) classifies streams, identified by ISDX (Ingress Service Index, frame metadata), and maps ISDX to flow-meters. SDLB's provide the flow-meter parameters. Signed-off-by: Daniel Machon <daniel.machon@microchip.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniel Machon authored
Add an initial API for configuring policers. This patch adds support for service policers. Signed-off-by: Daniel Machon <daniel.machon@microchip.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniel Machon authored
Add support for Service Dual Leaky Buckets (SDLB), used to implement PSFP flow-meters. Buckets are linked together in a leak chain of a leak group. Leak groups are preconfigured to serve buckets within a certain rate interval. Signed-off-by: Daniel Machon <daniel.machon@microchip.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniel Machon authored
Add resource pools and accessor functions. These pools can be queried by the driver, whenever a finite resource is required. Some resources can be reused, in which case an index and a reference count is used to keep track of users. Signed-off-by: Daniel Machon <daniel.machon@microchip.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
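A plausible shape for such a pool (a sketch under assumed semantics, not the driver's actual structures): entries carry a user-supplied index and a reference count, so a resource already claimed for the same purpose can be handed out again instead of consuming a fresh slot.

  #include <errno.h>

  struct pool_entry {
      int idx;      /* external user-supplied id, -1 when free */
      int refcnt;   /* 0 when free; pool zero-initialized, idx = -1 */
  };

  /* Get an entry for "idx": reuse an existing one if the same idx is
   * already pooled, otherwise claim a free slot.
   */
  static int pool_get(struct pool_entry *pool, int size, int idx)
  {
      int i, free = -1;

      for (i = 0; i < size; i++) {
          if (pool[i].refcnt && pool[i].idx == idx) {
              pool[i].refcnt++;
              return i;
          }
          if (!pool[i].refcnt && free < 0)
              free = i;
      }
      if (free < 0)
          return -ENOSPC;   /* finite chip resource exhausted */
      pool[free].idx = idx;
      pool[free].refcnt = 1;
      return free;
  }

  static void pool_put(struct pool_entry *pool, int i)
  {
      if (!--pool[i].refcnt)
          pool[i].idx = -1;
  }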
-
Daniel Machon authored
Add registers needed for PSFP. This patch also renames a single register, shortening its name (SYS_CLK_PER_100PS). Uses have been updated accordingly. Signed-off-by: Daniel Machon <daniel.machon@microchip.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 04 Feb, 2023 14 commits
-
-
David S. Miller authored
D. Wythe says:

====================
net/smc: optimize the parallelism of SMC-R connections

This patch set attempts to optimize the parallelism of SMC-R connections, mainly to reduce unnecessary blocking on locks, and to fix exceptions that occur after those optimizations.

According to the Off-CPU graph, the SMC workers' off-CPU time looks like this:

  smc_close_passive_work                  (1.09%)
          smcr_buf_unuse                  (1.08%)
                  smc_llc_flow_initiate   (1.02%)

  smc_listen_work                         (48.17%)
          __mutex_lock.isra.11            (47.96%)

An ideal SMC-R connection process should only block on the IO events of the network, but it's quite clear that the SMC-R connection now is queued on the lock most of the time. The goal of this patch set is to achieve our ideal situation, where network IO events are blocked for the majority of the connection lifetime.

There are three big locks here:

1. smc_client_lgr_pending & smc_server_lgr_pending
2. llc_conf_mutex
3. rmbs_lock & sndbufs_lock

And an implementation issue:

1. confirm/delete rkey msg can't be sent concurrently, even though the protocol does allow it.

Unfortunately, the above problems together affect the parallelism of SMC-R connections. If any of them is not solved, our goal cannot be achieved.

After this patch set, we can get a quite ideal off-CPU graph as follows:

  smc_close_passive_work                          (41.58%)
          smcr_buf_unuse                          (41.57%)
                  smc_llc_do_delete_rkey          (41.57%)

  smc_listen_work                                 (39.10%)
          smc_clc_wait_msg                        (13.18%)
                  tcp_recvmsg_locked              (13.18%)
          smc_listen_find_device                  (25.87%)
                  smcr_lgr_reg_rmbs               (25.87%)
                          smc_llc_do_confirm_rkey (25.87%)

We can see that most of the waiting time is now spent waiting for network IO events. This also yields a certain performance improvement in our short-lived connection wrk/nginx benchmark test:

  +--------------+------+------+-------+--------+------+--------+
  |conns/qps     |c4    | c8   |  c16  |  c32   | c64  |  c200  |
  +--------------+------+------+-------+--------+------+--------+
  |SMC-R before  |9.7k  | 10k  |  10k  |  9.9k  | 9.1k |  8.9k  |
  +--------------+------+------+-------+--------+------+--------+
  |SMC-R now     |13k   | 19k  |  18k  |  16k   | 15k  |  12k   |
  +--------------+------+------+-------+--------+------+--------+
  |TCP           |15k   | 35k  |  51k  |  80k   | 100k |  162k  |
  +--------------+------+------+-------+--------+------+--------+

The reason the benefit is less obvious as the number of connections grows is the workqueue. If we change the workqueue to UNBOUND, we can obtain at least a 4-5x performance improvement, reaching up to half of TCP. However, that is not an elegant solution, and optimizing it will be much more complicated. In any case, we will submit the relevant optimization patches as soon as possible. Please note that the premise is that the lock-related problems must be solved first; otherwise, no matter how we optimize the workqueue, there won't be much improvement.

Because there are a lot of related changes to the code, if you have any questions or suggestions, please let me know.

Thanks
D. Wythe

v1 -> v2:
1. Fix panic in SMC-D scenario
2. Fix lnkc related hashfn calculation exception, caused by operator priority
3. Only wake up one connection if the lnk is not active
4. Delete obsolete unlock logic in smc_listen_work()
5. PATCH format, do Reverse Christmas tree
6. PATCH format, change all xxx_lnk_xxx functions to xxx_link_xxx
7. PATCH format, add correct fix tag for the patches for fixes
8. PATCH format, fix some spelling errors
9. PATCH format, rename slow to do_slow

v2 -> v3:
1. Add SMC-D support, remove the concept of link cluster since SMC-D has no link at all. Replace it by an lgr decision maker, who provides suggestions to SMC-D and SMC-R on whether to create a new link group.
2. Fix the corruption problem described by PATCH 'fix application data exception' on SMC-D.

v3 -> v4:
1. Fix panic caused by an uninitialized map.

v4 -> v5:
1. Make SMC-D buf creation serial to avoid a potential error
2. Add a flag to synchronize the success of the first contact with the readiness of the link group, including SMC-D and SMC-R.
3. Fix a possible reference count leak in smc_llc_flow_start().
4. Reorder the patches, moving the bugfix patches ahead.

v5 -> v6:
1. Separate the bugfix patches to make them independent.
2. Merge patch 'fix SMC_CLC_DECL_ERR_REGRMB without smc_server_lgr_pending' with patch 'remove locks smc_client_lgr_pending and smc_server_lgr_pending'
3. Format code styles, including alignment and reverse christmas tree style.
4. Fix a possible memory leak in smc_llc_rmt_delete_rkey() and smc_llc_rmt_conf_rkey().

v6 -> v7:
1. Discard the patch attempting to remove the global locks
2. Discard the patch attempting to make the confirm/delete rkey process concurrent
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
D. Wythe authored
It's clear that rmbs_lock and sndbufs_lock aim to protect the rmbs list and the sndbufs list. During connection establishment, smc_buf_get_slot() is always invoked, and it only performs reads of the rmbs and sndbufs lists. Based on the above considerations, we replace the mutex with an rw_semaphore. Only smc_buf_get_slot() uses down_read(), so that multiple calls to smc_buf_get_slot() can run concurrently; the other parts use down_write() to keep exclusive semantics. Signed-off-by: D. Wythe <alibuda@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
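The pattern, as a kernel-style fragment (field and helper names are approximations of the SMC code, not verbatim):

  /* Readers of the buffer lists, like smc_buf_get_slot(), now take
   * the semaphore shared and can run concurrently:
   */
  down_read(&lgr->rmbs_lock);
  buf_desc = buf_lookup(&lgr->rmbs[bufsize_idx]);   /* read-only walk */
  up_read(&lgr->rmbs_lock);

  /* Writers, i.e. paths that add or delete list entries, take it
   * exclusive, which preserves the old mutex semantics:
   */
  down_write(&lgr->rmbs_lock);
  list_add(&buf_desc->list, &lgr->rmbs[bufsize_idx]);
  up_write(&lgr->rmbs_lock);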
-
D. Wythe authored
Unlike smc_buf_create() and smcr_buf_unuse(), smcr_lgr_reg_rmbs() needs to be exclusive only when the assigned rmb_desc has not been registered yet; when it has already been registered, smcr_lgr_reg_rmbs() can run in parallel, since it then only performs reads on it. Hence, we cannot simply replace the lock with a read semaphore. The idea here is: if the assigned rmb_desc is already registered, use the read semaphore to protect the critical section; if it is not registered yet, keep using the write semaphore to retain exclusivity. Thanks to the reusable nature of rmb_desc, we can execute in parallel in most cases. Signed-off-by: D. Wythe <alibuda@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
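In pseudocode, the conditional locking this describes (a sketch; the actual field, flag, and lock names in the patch differ):

  bool do_slow = !rmb_desc->is_registered;   /* hypothetical flag */

  if (do_slow)
          down_write(&lgr->reg_lock);   /* first registration: exclusive */
  else
          down_read(&lgr->reg_lock);    /* already registered: shared */

  /* the real code re-checks the flag under the lock to avoid racing
   * with a concurrent first registration, then registers or reuses
   * the rmb_desc
   */

  if (do_slow)
          up_write(&lgr->reg_lock);
  else
          up_read(&lgr->reg_lock);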
-
D. Wythe authored
Following is part of the Off-CPU graph during frequent SMC-R short-lived connection processing:

  process_one_work                                  (51.19%)
          smc_close_passive_work                    (28.36%)
                  smcr_buf_unuse                    (28.34%)
                          rwsem_down_write_slowpath (28.22%)
          smc_listen_work                           (22.83%)
                  smc_clc_wait_msg                  (1.84%)
                  smc_buf_create                    (20.45%)
                          smcr_buf_map_usable_links
                                  rwsem_down_write_slowpath (20.43%)
                  smcr_lgr_reg_rmbs                 (0.53%)
                          rwsem_down_write_slowpath (0.43%)
                          smc_llc_do_confirm_rkey   (0.08%)

We can clearly see that during connection establishment, connections wait not on IO, but on llc_conf_mutex. More importantly, the core critical sections (smcr_buf_unuse() & smc_buf_create()) only perform reads on the links, so we can easily protect them with a read semaphore. Signed-off-by: D. Wythe <alibuda@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
D. Wythe authored
llc_conf_mutex was used to protect links and link-related configurations within the same link group, for example adding or deleting links. However, in most cases the protected critical sections have only read semantics, with no write semantics at all, such as obtaining a usable link or an available rmb_desc. This patch simply refactors the code: replace the mutex with an rw_semaphore, mutex_lock with down_write, and mutex_unlock with up_write. Theoretically, this replacement is equivalent, but after this patch we can distinguish lock granularity according to the different semantics of the critical sections. Signed-off-by: D. Wythe <alibuda@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Vladimir Oltean says: ==================== Updates to ENETC TXQ management The set ensures that the number of TXQs given by enetc to the network stack (mqprio or TX hashing) + the number of TXQs given to XDP never exceeds the number of available TXQs. These are the first 4 patches of series "[v5,net-next,00/17] ENETC mqprio/taprio cleanup" from here: https://patchwork.kernel.org/project/netdevbpf/cover/20230202003621.2679603-1-vladimir.oltean@nxp.com/ There is no change in this version compared to there. I split them off because this contains a fix for net-next and it would be good if it could go in quickly. I also did it to reduce the patch count of that other series, if I need to respin it again. ==================== Link: https://lore.kernel.org/r/20230203001116.3814809-1-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
Currently it can happen that an mqprio qdisc is installed with num_tc 8, and this will reserve 8 (out of 8) TXQs for the network stack. Then we can attach an XDP program, and this will crop 2 TXQs, leaving just 6 for mqprio. That's not what the user requested, and we should fail it. On the other hand, if mqprio isn't requested, we still give the 8 TXQs to the network stack (with hashing among a single traffic class), but then cropping 2 TXQs for XDP is fine, because the user didn't explicitly ask for any number of TXQs, so no expectations are violated. Simply put, the logic that mqprio should impose a minimum number of TXQs for the network stack never existed. Let's say (more or less arbitrarily) that without mqprio, the driver expects a minimum number of TXQs equal to the number of CPUs (on NXP LS1028A, that is either 1 or 2), and with mqprio, mqprio gives the minimum required number of TXQs. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
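A sketch of the validation this implies (illustrative, not the driver's exact code): the TXQs left after XDP's cut must still cover the required minimum.

  #include <errno.h>

  /* Minimum TXQs owed to the stack: what mqprio asked for, or one per
   * CPU by default. Names and signature are assumptions.
   */
  static int enetc_check_tx_queues(int num_tx_rings, int num_tc_requested,
                                   int num_xdp_queues, int num_cpus)
  {
      int min_required = num_tc_requested ? num_tc_requested : num_cpus;

      if (num_tx_rings - num_xdp_queues < min_required)
          return -EBUSY;   /* XDP would crop below the guaranteed set */
      return 0;
  }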
-
Vladimir Oltean authored
Since the blamed net-next commit, enetc_setup_xdp_prog() no longer goes through enetc_open(), and therefore, the function which was supposed to detect whether a BPF program exists (in order to crop some TX queues from network stack usage), enetc_num_stack_tx_queues(), no longer gets called. We can move the netif_set_real_num_rx_queues() call to enetc_alloc_msix() (probe time), since it is a runtime invariant. We can do the same thing with netif_set_real_num_tx_queues(), and let enetc_reconfigure_xdp_cb() explicitly recalculate and change the number of stack TX queues. Fixes: c33bfaf9 ("net: enetc: set up XDP program under enetc_reconfigure()") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
enetc_reconfigure() was modified in commit c33bfaf9 ("net: enetc: set up XDP program under enetc_reconfigure()") to take an optional callback that runs while the netdev is down, but this callback currently cannot fail. Code up the error handling so that the interface is restarted with the old resources if the callback fails. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
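The resulting shape is the usual construct-then-commit pattern (a sketch with assumed names, not the driver's exact code): if the callback run while the interface is down fails, rebuild the old resources before reattaching.

  static int reconfigure(struct priv *priv,
                         int (*cb)(struct priv *priv, void *ctx), void *ctx)
  {
      int err;

      netif_device_detach(priv->ndev);
      teardown_rings(priv);              /* stop traffic, free old rings */

      if (cb) {
          err = cb(priv, ctx);           /* e.g. attach the XDP program */
          if (err)
              goto out_restore;
      }

      err = setup_rings(priv);           /* allocate with the new config */
      if (err)
          goto out_restore;
      netif_device_attach(priv->ndev);
      return 0;

  out_restore:
      setup_rings_old_config(priv);      /* bring the old resources back */
      netif_device_attach(priv->ndev);
      return err;
  }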
-
Vladimir Oltean authored
We keep a pointer to the xdp_prog in the private netdev structure as well; what's replicated per RX ring is done so just for more convenient access from the NAPI poll procedure. Simplify enetc_num_stack_tx_queues() by looking at priv->xdp_prog rather than iterating through the information replicated per RX ring. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
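The simplified helper then boils down to consulting priv->xdp_prog directly, assuming, as elsewhere in the set, that XDP reserves one TX ring per RX ring (a close paraphrase, not a verbatim copy of the driver code):

  static int enetc_num_stack_tx_queues(struct enetc_ndev_priv *priv)
  {
      int total = priv->num_tx_rings;

      /* If a program is attached, one TX ring per RX ring is
       * reserved for XDP_TX/XDP_REDIRECT and hidden from the stack.
       */
      if (priv->xdp_prog)
          return total - priv->num_rx_rings;

      return total;
  }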
-
Jakub Kicinski authored
Eric Dumazet says: ==================== raw: add drop reasons and use another hash function The first two patches add drop reasons to raw input processing. The last patch spreads RAW sockets in the shared hash tables to avoid long hash buckets in some cases. ==================== Link: https://lore.kernel.org/r/20230202094100.3083177-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Eric Dumazet authored
Some applications seem to rely on RAW sockets. If they use private netns, we can avoid piling all RAW sockets bound to a given protocol into a single bucket. Also place (struct raw_hashinfo).lock into its own cache line to limit false sharing. The alternative would be per-netns hash tables, but this seems too expensive for most netns, where RAW sockets are not used. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
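The spreading boils down to mixing a per-netns value into the hash instead of hashing on the protocol alone (a sketch of the idea; the kernel's helper mixes net_hash_mix() with the protocol in a similar way, and the table-size constant here is an assumption):

  #include <stdint.h>

  #define RAW_HTABLE_LOG 8   /* assumed: 256 buckets */

  /* With the netns cookie in the mix, RAW sockets of different netns
   * bound to the same protocol no longer land in the same bucket.
   */
  static uint32_t raw_hashfunc(uint32_t net_cookie, uint32_t proto)
  {
      uint32_t h = net_cookie ^ proto;

      /* 32-bit multiplicative hash (golden-ratio constant, as in the
       * kernel's hash_32()); keep the top RAW_HTABLE_LOG bits
       */
      return (h * 0x61C88647u) >> (32 - RAW_HTABLE_LOG);
  }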
-
Eric Dumazet authored
Use existing helpers and drop reason codes for the RAW input path. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
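Concretely, adding drop reasons means bare kfree_skb() calls in the receive path become kfree_skb_reason() calls, so tracing can attribute each drop. An illustrative before/after; the reason code shown is just one example, and the actual code per call site is the patch's choice:

  /* before: the drop is anonymous */
  kfree_skb(skb);

  /* after: tools like perf and dropwatch see the cause */
  kfree_skb_reason(skb, SKB_DROP_REASON_XFRM_POLICY);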
-
Eric Dumazet authored
Use existing helpers and drop reason codes for the RAW input path. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-