- 17 Apr, 2023 7 commits
-
-
Paolo Abeni authored
After commit 3a236aef ("mptcp: refactor passive socket initialization"), mptcp_pm_fully_established() is always invoked with a GFP_ATOMIC argument, so we can drop it. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net> Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> Signed-off-by: David S. Miller <davem@davemloft.net>
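A sketch of the resulting simplification (prototypes recalled from the mptcp code, shown here for illustration only):

    /* Before: every caller ended up passing GFP_ATOMIC. */
    void mptcp_pm_fully_established(struct mptcp_sock *msk,
                                    const struct sock *ssk, gfp_t gfp);

    /* After: the redundant parameter is gone; GFP_ATOMIC is used
     * internally at the single allocation site. */
    void mptcp_pm_fully_established(struct mptcp_sock *msk,
                                    const struct sock *ssk);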
-
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
David S. Miller authored
mlx5-updates-2023-04-14 Yevgeny Kliteynik says: ======================= SW Steering: Support pattern/args modify_header actions The following patch series adds support for a new pattern/arguments type of modify_header actions. Starting with ConnectX-6 DX, we use a new design of the modify_header FW object. The current modify_header object allows for only a limited number of these FW objects, which means that we are limited in the number of offloaded flows that require a modify_header action. The new approach comprises two types of objects: pattern and argument. A pattern holds header modification templates, later used with a corresponding argument object to create a complete header modification action. The pattern indicates which headers are modified, while the arguments provide the specific values. Therefore a single pattern can be used with different arguments in different flows, enabling the offload of a large number of modify_header flows.
- Patch 1, 2: Add ICM pool for modify-header-pattern objects and implement patterns cache, allowing pattern reuse for different flows
- Patch 3: Allow chunk allocation separately for STEv0 and STEv1
- Patch 4: Read related device capabilities
- Patch 5: Add create/destroy functions for the new general object type
- Patch 6: Add support for writing modify header arguments to ICM
- Patch 7, 8: Some required fixes to support pattern/arg - separate the read buffer from the write buffer and fix QP continuous allocation
- Patch 9: Add pool for modify header arg objects
- Patch 10, 11, 12: Implement MODIFY_HEADER and TNL_L3_TO_L2 actions with the new patterns/args design
- Patch 13: Optimization - set modify header action of size 1 directly on the STE instead of a separate pattern/args combination
- Patch 14: Adjust debug dump for patterns/args
- Patch 15: Enable patterns and arguments for supporting devices
=======================
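To make the pattern/argument split concrete, a conceptual sketch; the struct names and fields below are hypothetical illustrations, not the driver's actual types:

    /* A pattern is a shared, cached template describing WHICH header
     * fields a flow rewrites; an argument is a small per-flow object
     * holding the VALUES. Many flows reference one pattern, each with
     * its own argument, so the limited FW object space goes further. */
    #define DR_ARG_DATA_SIZE 64             /* hypothetical */

    struct dr_pattern {
            u32 fw_obj_id;                  /* pattern FW object */
            int num_of_actions;             /* modification template size */
            refcount_t refcount;            /* shared across flows */
    };

    struct dr_argument {
            u32 fw_obj_id;                  /* argument FW object */
            u8 data[DR_ARG_DATA_SIZE];      /* per-flow rewrite values */
    };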
-
David S. Miller authored
Aaron Conole says: ==================== selftests: openvswitch: add support for testing upcall interface The existing selftest suite for openvswitch will work for regression testing the datapath feature bits, but won't test things like adding interfaces, or the upcall interface. Here, we add some additional test facilities. First, extend the ovs-dpctl.py python module to support the OVS_FLOW and OVS_PACKET netlink families, with some associated messages. These can be extended over time, but the initial support is for the more well-known cases (output, userspace, and CT). Next, extend the test suite to test upcalls by adding a datapath, monitoring the upcall socket associated with the datapath, and then dumping any upcalls that are received. Compare with the expected ARP upcall via arping. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Aaron Conole authored
The upcall socket interface can be exercised now to make sure that future feature adjustments to the field can maintain backwards compatibility. Signed-off-by: Aaron Conole <aconole@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Aaron Conole authored
Add a basic set of fields to print in a 'dpflow' format. This will be used by future commits to check for flow fields after parsing, as well as to verify the flow fields pushed into the kernel from userspace. Signed-off-by: Aaron Conole <aconole@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Aaron Conole authored
Includes an associated test to generate netns and connect interfaces, with the option to include packet tracing. This will be used in the future when flow support is added for additional test cases. Signed-off-by: Aaron Conole <aconole@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Horatiu Vultur authored
If the 1PPS output was enabled and the lan8841 was then configured to be a follower, the target clock used to generate the 1PPS was not configured correctly. The problem was that every adjustment of the time also changed the nanosecond part of the target clock, so the initial nanosecond part of the target clock was lost. The issue can be observed when both the leader and the follower generate 1PPS: their pulses are not aligned even when their times are aligned. The fix consists of not modifying the nanosecond part of the target clock when adjusting the time. In this way the 1PPS also gets aligned. Fixes: e4ed8ba0 ("net: phy: micrel: Add support for PTP_PF_PEROUT for lan8841") Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
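A sketch of the idea behind the fix (the helper names below are hypothetical, not the actual micrel driver API):

    /* PHC adjtime path: re-arm only the seconds part of the periodic
     * output target. The nanosecond phase programmed when the 1PPS
     * output was enabled stays untouched, so leader and follower keep
     * their pulse edges aligned once their clocks agree. */
    static void lan8841_rearm_pps_target(struct phy_device *phydev, s64 new_sec)
    {
            lan8841_ptp_set_target_sec(phydev, new_sec + 1); /* hypothetical */
            /* deliberately no write to the target-nanoseconds register */
    }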
-
- 15 Apr, 2023 4 commits
-
-
Jakub Kicinski authored
Jakub Kicinski says: ==================== page_pool: allow caching from safely localized NAPI I went back to the explicit "are we in NAPI" method, mostly because I don't like having both around :( (even tho I maintain that in_softirq() && !in_hardirq() is as safe, as softirqs do not nest). Still returning the skbs to a CPU, tho, not to the NAPI instance. I reckon we could create a small refcounted struct per NAPI instance which would allow sockets and other users to hold a persistent and safe reference. But that's a bigger change, and I get 90+% recycling thru the cache with just these patches (for RR and streaming tests with 100% CPU use it's almost 100%). Some numbers for a streaming test with 100% CPU use (from the previous version, but really they perform the same):

                       HW-GRO                    page=page
                   before      after         before      after
recycle:
  cached:               0  138669686              0  150197505
  cache_full:           0     223391              0      74582
  ring:         138551933    9997191      149299454          0
  ring_full:            0        488           3154     127590
  released_refcnt:      0          0              0          0
alloc:
  fast:         136491361  148615710      146969587  150322859
  slow:              1772       1799            144        105
  slow_high_order:      0          0              0          0
  empty:             1772       1799            144        105
  refill:         2165245     156302        2332880       2128
  waive:                0          0              0          0

v1: https://lore.kernel.org/all/20230411201800.596103-1-kuba@kernel.org/ rfcv2: https://lore.kernel.org/all/20230405232100.103392-1-kuba@kernel.org/ ====================
Link: https://lore.kernel.org/r/20230413042605.895677-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
bnxt has a 1:1 mapping of page pools and NAPIs, so it's safe to hook them up together. Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Tested-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Recent patches to mlx5 mentioned a regression when moving from driver-local page pool to only using the generic page pool code. Page pool has two recycling paths: (1) the direct one, which runs in safe NAPI context (basically consumer context, so producing can be lockless); and (2) via a ptr_ring, which takes a spin lock because the freeing can happen from any CPU; producer and consumer may run concurrently. Since the page pool code was added, Eric introduced a revised version of deferred skb freeing. TCP skbs are now usually returned to the CPU which allocated them, and freed in softirq context. This places the freeing (producing of pages back to the pool) enticingly close to the allocation (consumer). If we can prove that we're freeing in the same softirq context in which the consumer NAPI will run - lockless use of the cache is perfectly fine, no need for the lock. Let drivers link the page pool to a NAPI instance. If the NAPI instance is scheduled on the same CPU on which we're freeing - place the pages in the direct cache. With that and a patched bnxt (XDP enabled to engage the page pool, sigh, bnxt really needs page pool work :() I see a 2.6% perf boost with a TCP stream test (app on a different physical core than softirq). The CPU use of relevant functions decreases as expected: page_pool_refill_alloc_cache 1.17% -> 0%, _raw_spin_lock 2.41% -> 0.98%. Only consider the lockless path to be safe when NAPI is scheduled - in practice this should cover the majority, if not all, of steady-state workloads. It's usually the NAPI kicking in that causes the skb flush. The main case we miss out on is when an application runs on the same CPU as NAPI; in that case we don't use the deferred skb free path. Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Tested-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
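A minimal driver-side sketch of the linking, assuming the series adds a napi member to struct page_pool_params (the other fields are the existing page pool API; ring_size, rxr and pdev come from driver context):

    struct page_pool_params pp = { 0 };
    struct page_pool *pool;

    pp.pool_size = ring_size;
    pp.nid = NUMA_NO_NODE;
    pp.dev = &pdev->dev;
    pp.dma_dir = DMA_BIDIRECTIONAL;
    /* 1:1 mapping: every page from this pool is consumed by one NAPI
     * instance, so the direct cache may be used locklessly whenever
     * that NAPI is scheduled on the CPU doing the freeing. */
    pp.napi = &rxr->bnapi->napi;

    pool = page_pool_create(&pp);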
-
Jakub Kicinski authored
We maintain a NAPI-local cache of skbs which is fed by napi_consume_skb(). Going forward we will also try to cache head and data pages. Plumb the "are we in a normal NAPI context" information deeper into the freeing path, up to skb_release_data() and skb_free_head()/skb_pp_recycle(). The "not normal NAPI context" case comes from netpoll, which passes a budget of 0 to reap the Tx completions without performing any Rx. Use "bool napi_safe" rather than a bare "int budget"; the further we get from NAPI, the more confusing the budget argument becomes (particularly whether 0 or MAX is the correct value to pass when not in NAPI). Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Tested-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
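The resulting internal signatures, as described above (a sketch; exact prototypes assumed from the description):

    static void skb_release_data(struct sk_buff *skb,
                                 enum skb_drop_reason reason, bool napi_safe);
    static void skb_free_head(struct sk_buff *skb, bool napi_safe);
    static bool skb_pp_recycle(struct sk_buff *skb, void *data, bool napi_safe);

At the entry points, napi_safe is simply derived as "budget != 0", since only netpoll passes a zero budget.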
-
- 14 Apr, 2023 29 commits
-
-
Yevgeny Kliteynik authored
Check if patterns and arguments for modify header action are supported and enable them accordingly. Signed-off-by: Muhammad Sammar <muhammads@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Support the pattern/args-based MODIFY_HDR and TNL_L3_TO_L2 actions in the debug dump. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Set a modify header action of size 1 directly on the STE for supporting devices, thus reducing the number of hops and cache misses. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Use the new accelerated action for decap L3 on the RX side, using the same pattern/argument mechanism as in the modify-header action. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
If there is support for pattern/args, use the new accelerated modify header action for modify header and decap L3 actions. Otherwise fall back to the old modify-header implementation. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
While building the actions, add the pointer to the arguments of the accelerated modify-list action into the action's attributes. This will be used later when building the specific STE for this action. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Add a new mechanism for handling arguments for the modify-header action. The new "accelerated modify-header" action takes its arguments from an area separate from the pattern; this area is accessed via general objects. Handling of these objects is done via the pool-manager struct. When the new header patterns are supported, a few pools for argument creation are created while loading the domain. Requests for allocating/deallocating arg objects go through the pool manager API. Signed-off-by: Muhammad Sammar <muhammads@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
When allocating a QP we allocate an RQ and an SQ; the RQ is stored first in memory, followed by the SQ. This allocation is not physically contiguous - it may span different physical pages. SW Steering code always writes in pairs: a 1BB write + 1BB read, or 2 contiguous BBs of a GTA WQE. This led to an issue where the RQ allocation was 4x16 bytes, which is equal to 1 WQE BB, causing a 1 BB offset in the page and splitting the GTA WQE between different physical pages. The solution is to create the RQ with an even number of BBs and to have the RQ aligned to a page. Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
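The arithmetic behind the fix, as a sketch (macro and function names assumed; an mlx5 send WQE basic block, BB, is 64 bytes, so the 4x16-byte RQ is exactly 1 BB):

    #define DR_WQE_BB_SIZE 64       /* mlx5 send WQE basic block, bytes */

    /* Size the RQ to an even number of BBs inside a page-aligned
     * buffer, so the SQ that follows starts on an even BB boundary
     * and a 2-BB GTA WQE never straddles a physical page. */
    static size_t dr_qp_rq_bytes(unsigned int rq_wqe_cnt)
    {
            size_t bytes = (size_t)rq_wqe_cnt * DR_WQE_BB_SIZE;

            return ALIGN(bytes, 2 * DR_WQE_BB_SIZE);
    }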
-
Yevgeny Kliteynik authored
Instead of using the write buffer for reading, use a dedicated buffer only for reading ICM memory. With the new support for args, pending_wc can be an odd number, and when reading into the same write buffer it is possible to overwrite the next write in the same slot. For example, with pending_wc at 17, the buffer for writes is:

| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |

and the requests arrive as follows:

r wr wr wr wr wr wr wr wr

Now the first read is written into the slot of the last write, because the same buffer is used for both read and write, before that write was posted to the HW - leaving wrong data in the ICM area. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
The accelerated modify header arguments are written in the HW area with a special WQE and a specific data format. A new function is added to support writing of this new argument type. Note that the GTA WQE is larger than the READ and WRITE WQEs, so the queue management logic was updated to support this. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Add functions for creation/destruction of the new type of general object. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
This way we are able to allocate chunks for modify_headers of two types: STEv0, allocated from the action area, and STEv1, where chunks are allocated from the special area for patterns. Signed-off-by: Muhammad Sammar <muhammads@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Starting with ConnectX-6 Dx, we use a new design of the modify_header FW object. The current modify_header object allows for only a limited number of FW objects, so the new design of pattern and argument allows pattern reuse, saving memory and supporting a large number of modify_header objects. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Move ACTION_CACHE_LINE_SIZE macro to header to be used by the pattern functions as well. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
David S. Miller authored
Kevin Brodsky says: ==================== net: Finish up ->msg_control{,_user} split Commit 1f466e1f ("net: cleanly handle kernel vs user buffers for ->msg_control") introduced the msg_control_user and msg_control_is_user fields in struct msghdr, to ensure that user pointers are represented as such. It also took care of converting most users of struct msghdr::msg_control where user pointers are involved. It did however miss a number of cases, and some code using msg_control inappropriately has also appeared in the meantime. This series is attempting to complete the split, by eliminating the remaining cases where msg_control is used when in fact a user pointer is stored in the union (patch 1). It also addresses a couple of issues with msg_control_is_user: one where it is not updated as it should (patch 2), and one where it is not initialised (patch 3). v1..v2: * Split out the msg_control_is_user fixes into separate patches. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
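For reference, the shape of the split inside struct msghdr after commit 1f466e1f (excerpt recalled from include/linux/socket.h; see the header for the authoritative layout):

    bool        msg_control_is_user : 1;        /* which union member is live */
    union {
            void            *msg_control;       /* kernel pointer */
            void __user     *msg_control_user;  /* user pointer */
    };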
-
Kevin Brodsky authored
do_ipv6_setsockopt() makes use of struct msghdr::msg_control in the IPV6_2292PKTOPTIONS case. Make sure to initialise msg_control_is_user accordingly. Cc: Christoph Hellwig <hch@lst.de> Cc: Eric Dumazet <edumazet@google.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kevin Brodsky authored
cmsghdr_from_user_compat_to_kern() is an unusual case w.r.t. how the kmsg->msg_control* fields are used. The input struct msghdr holds a pointer to a user buffer, i.e. ksmg->msg_control_user is active. However, upon success, a kernel pointer is stored in kmsg->msg_control. kmsg->msg_control_is_user should therefore be updated accordingly. Cc: Christoph Hellwig <hch@lst.de> Cc: Eric Dumazet <edumazet@google.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David S. Miller <davem@davemloft.net>
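The fix, in sketch form (simplified from the description of cmsghdr_from_user_compat_to_kern(); kcmsg_base stands for the kernel buffer built earlier in the function):

    /* The control data now lives in a kernel buffer, so flip the
     * msghdr to "kernel pointer" mode before publishing it. */
    kmsg->msg_control_is_user = false;
    kmsg->msg_control = kcmsg_base;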
-
Kevin Brodsky authored
Since commit 1f466e1f ("net: cleanly handle kernel vs user buffers for ->msg_control"), pointers to user buffers should be stored in struct msghdr::msg_control_user, instead of the msg_control field. Most users of msg_control have already been converted (where user buffers are involved), but not all of them. This patch attempts to address the remaining cases. An exception is made for null checks, as it should be safe to use msg_control unconditionally for that purpose. Cc: Christoph Hellwig <hch@lst.de> Cc: Eric Dumazet <edumazet@google.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Arseniy Krasnov authored
This replaces 'skb_queue_tail()' with 'virtio_vsock_skb_queue_tail()'. The first uses 'spin_lock_irqsave()', the second uses 'spin_lock_bh()'. There is no need to disable interrupts in the loopback transport, as the queue of skbs is never accessed from interrupt context. Both the virtio and vhost transports work the same way. Signed-off-by: Arseniy Krasnov <AVKrasnov@sberdevices.ru> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
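For reference, a sketch of the helper's locking as described (recalled from include/linux/virtio_vsock.h, shown for illustration):

    static inline void virtio_vsock_skb_queue_tail(struct sk_buff_head *list,
                                                   struct sk_buff *skb)
    {
            /* BH-only protection: sufficient because the queue is never
             * touched from hardirq context, and cheaper than the
             * spin_lock_irqsave() done by skb_queue_tail(). */
            spin_lock_bh(&list->lock);
            __skb_queue_tail(list, skb);
            spin_unlock_bh(&list->lock);
    }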
-
David S. Miller authored
Haiyang Zhang says: ==================== net: mana: Add support for jumbo frame The set adds support for jumbo frame, with some optimization for the RX path. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Haiyang Zhang authored
During probe, get the hardware-allowed max MTU by querying the device configuration. Users can select an MTU up to the device limit. When XDP is in use, limit MTU settings so the buffer size is within one page; when the MTU is set to too large a value, XDP is not allowed to run. Also, to prevent a failed MTU change from leaving the NIC in a bad state, pre-allocate all buffers before starting the change, so that in a low-memory condition it returns an error without affecting the NIC. Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
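The ordering being described, as a hedged sketch (all helper names hypothetical, not the actual mana API):

    static int mana_change_mtu_sketch(struct net_device *ndev, int new_mtu)
    {
            int err;

            /* Allocate everything the new MTU needs up front... */
            err = pre_alloc_rx_buffers(ndev, new_mtu);      /* hypothetical */
            if (err)
                    return err;     /* low memory: NIC keeps running as-is */

            /* ...and only then tear down and rebuild the queues. */
            stop_queues(ndev);                              /* hypothetical */
            ndev->mtu = new_mtu;
            start_queues_with_prealloc(ndev);               /* hypothetical */
            return 0;
    }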
-
Haiyang Zhang authored
Update RX data path to allocate and use RX queue DMA buffers with proper size based on potentially various MTU sizes. Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Haiyang Zhang authored
Move out common buffer allocation code from mana_process_rx_cqe() and mana_alloc_rx_wqe() to helper functions. Refactor related variables so they can be changed in one place, and buffer sizes are in sync. Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Haiyang Zhang authored
Use napi_build_skb() instead of build_skb() to take advantage of the NAPI percpu caches to obtain skbuff_head. Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
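The substitution itself (a fragment; buf_va and truesize come from the RX poll path):

    /* build_skb() gets the skbuff_head from the slab allocator;
     * napi_build_skb() takes it from the per-CPU NAPI cache instead,
     * which is only safe because this runs in NAPI/softirq context. */
    skb = napi_build_skb(buf_va, truesize);
    if (!skb)
            return NULL;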
-
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Jakub Kicinski authored
Saeed Mahameed says: ==================== mlx5-updates-2023-04-11 1) Vlad adds support for linux bridge multicast offload, patches #1 through #9. Synopsis, Vlad says: ============== Implement support of bridge multicast offload in mlx5. Handle the port object attribute SWITCHDEV_ATTR_ID_BRIDGE_MC_DISABLED notification to toggle multicast offload and bridge snooping support on the bridge. Handle the port object SWITCHDEV_OBJ_ID_PORT_MDB notification to attach a bridge port to an MDB. Steering architecture: the existing offload infrastructure relies on two levels of flow tables - bridge ingress and egress. For multicast offload the architecture is extended with an additional layer of per-port multicast replication tables. Such tables filter loopback traffic (so packets are not replicated to their source port) and pop VLAN headers for "untagged" VLANs. The tables are referenced by the MDB rules in the egress table. An MDB egress rule can point to multiple per-port multicast tables, which causes matching multicast traffic to be replicated to all of them and, consequently, to several bridge ports:

[ASCII diagram not reproduced here: the INGRESS table (src_mac/VLAN match, pop VLAN, goto egress) feeds the EGRESS table; unicast EGRESS entries such as "dst_mac=P1,vlan=X -> pop vlan, goto P1" deliver straight to a port, while an MDB entry such as "dst_mac=MDB1,vlan=Y -> goto mcast P1,P2" fans out to the per-port multicast tables of Port 1 and Port 2, each of which drops src_port=dst_port loopback, pops VLAN headers where needed, and forwards to its port.]

Patches overview:
- Patch 1 adds hardware definition bits for capabilities required to replicate multicast packets to multiple per-port tables. These bits are used by the following patches to only attempt multicast offload if firmware and hardware provide the necessary support.
- Patches 2-4 are preparations and refactoring.
- Patch 5 implements the infrastructure necessary to toggle multicast offload via the SWITCHDEV_ATTR_ID_BRIDGE_MC_DISABLED port object attribute notification. This also enables IGMP and MLD snooping.
- Patch 6 implements per-port multicast replication tables. It only supports filtering of loopback packets.
- Patch 7 extends per-port multicast tables with VLAN pop support for 'untagged' VLANs.
- Patch 8 handles SWITCHDEV_OBJ_ID_PORT_MDB port object notifications. It creates MDB replication rules in the egress table that can replicate packets to multiple per-port multicast tables.
- Patch 9 adds tracepoints for MDB events.
============== 2) Parav creates a new allocation profile for SFs, to save on memory. 3) Yevgeny provides some initial patches for the upcoming software steering support for the new pattern/arguments type of modify_header actions. Starting with ConnectX-6 DX, we use a new design of the modify_header FW object. The current modify_header object allows for only a limited number of these FW objects, which means that we are limited in the number of offloaded flows that require a modify_header action. As a preparation, Yevgeny provides the following 4 patches:
- Patch 1: Add required mlx5_ifc HW bits
- Patch 2, 3: Add the new WQE type and opcode required for pattern/arg support and add appropriate support in dr_send.c
- Patch 4: Add an ICM pool for modify-header-pattern objects and implement a patterns cache, allowing pattern reuse for different flows

* tag 'mlx5-updates-2023-04-11' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
  net/mlx5: DR, Add modify-header-pattern ICM pool
  net/mlx5: DR, Prepare sending new WQE type
  net/mlx5: Add new WQE for updating flow table
  net/mlx5: Add mlx5_ifc bits for modify header argument
  net/mlx5: DR, Set counter ID on the last STE for STEv1 TX
  net/mlx5: Create a new profile for SFs
  net/mlx5: Bridge, add tracepoints for multicast
  net/mlx5: Bridge, implement mdb offload
  net/mlx5: Bridge, support multicast VLAN pop
  net/mlx5: Bridge, add per-port multicast replication tables
  net/mlx5: Bridge, snoop igmp/mld packets
  net/mlx5: Bridge, extract code to lookup parent bridge of port
  net/mlx5: Bridge, move additional data structures to priv header
  net/mlx5: Bridge, increase bridge tables sizes
  net/mlx5: Add mlx5_ifc definitions for bridge multicast support
====================
Link: https://lore.kernel.org/r/20230412040752.14220-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Vladimir Oltean says: ==================== Add kernel tc-mqprio and tc-taprio support for preemptible traffic classes The last RFC in August 2022 contained a proposal for the UAPI of both TSN standards which together form Frame Preemption (802.1Q and 802.3): https://lore.kernel.org/netdev/20220816222920.1952936-1-vladimir.oltean@nxp.com/ It wasn't clear at the time whether the 802.1Q portion of Frame Preemption should be exposed via the tc qdisc (mqprio, taprio) or via some other layer (perhaps also ethtool like the 802.3 portion, or dcbnl), even though the options were discussed extensively, with pros and cons: https://lore.kernel.org/netdev/20220816222920.1952936-3-vladimir.oltean@nxp.com/ So the 802.3 portion got submitted separately and finally was accepted: https://lore.kernel.org/netdev/20230119122705.73054-1-vladimir.oltean@nxp.com/ leaving the only remaining question: how do we expose the 802.1Q bits? This series proposes that we use the Qdisc layer, through separate (albeit very similar) UAPI in mqprio and taprio, and that both these Qdiscs pass the information down to the offloading device driver through the common mqprio offload structure (which taprio also passes). An implementation is provided for the NXP LS1028A on-board Ethernet endpoint (enetc). Previous versions also contained support for its embedded switch (felix), but this needs more work and will be submitted separately. v4: https://lore.kernel.org/netdev/20230403103440.2895683-1-vladimir.oltean@nxp.com/ v2: https://lore.kernel.org/netdev/20230219135309.594188-1-vladimir.oltean@nxp.com/ v1: https://lore.kernel.org/netdev/20230216232126.3402975-1-vladimir.oltean@nxp.com/ ====================
Link: https://lore.kernel.org/r/20230411180157.1850527-1-vladimir.oltean@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
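A sketch of the common offload structure both qdiscs pass down (members besides qopt recalled from the existing API and may differ; preemptible_tcs is the addition this series describes, exact type assumed):

    struct tc_mqprio_qopt_offload {
            struct tc_mqprio_qopt qopt;     /* classic per-TC queue layout */
            u16 mode;
            u16 shaper;
            u32 flags;
            u64 min_rate[TC_QOPT_MAX_QUEUE];
            u64 max_rate[TC_QOPT_MAX_QUEUE];
            unsigned long preemptible_tcs;  /* bit n set: TC n preemptible */
    };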
-
Vladimir Oltean authored
PFs which support the MAC Merge layer also have a set of 8 registers called "Port traffic class N frame preemption register (PTC0FPR - PTC7FPR)". Through these, a traffic class (group of TX rings of the same dequeue priority) can be mapped to the eMAC or to the pMAC. There's nothing particularly spectacular here. We should probably only commit the preemptible TCs to hardware once the MAC Merge layer becomes active, but unlike Felix, we don't have an IRQ that notifies us of that. We'd have to sleep for up to verifyTime (127 ms) to wait for a resolution coming from the verification state machine; not only in the ndo_setup_tc() code path, but also in enetc_mm_link_state_update(). Since it's relatively complicated and has a relatively small benefit, I'm not doing it. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Ferenc Fejes <fejes@inf.elte.hu> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
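A sketch of the register programming implied above (ENETC_PTCFPR() and the enable bit are assumed names; the PTC0FPR..PTC7FPR registers themselves are from the text):

    int tc;

    /* One register per traffic class: set the frame-preemption enable
     * bit to steer the TC to the pMAC, clear it for the eMAC. */
    for (tc = 0; tc < 8; tc++)
            enetc_port_wr(hw, ENETC_PTCFPR(tc),
                          (preemptible_tcs & BIT(tc)) ? ENETC_PTCFPR_FPE : 0);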
-
Vladimir Oltean authored
To gain access to the larger encapsulating structure, which has the type tc_mqprio_qopt_offload, rename just the "mqprio" variable to "qopt". Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Ferenc Fejes <fejes@inf.elte.hu> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-