- 02 Jan, 2022 17 commits
-
-
Colin Foster authored
Remove references to the lynx_pcs structures so that drivers like the Felix DSA driver can reference alternate PCS drivers. Signed-off-by: Colin Foster <colin.foster@in-advantage.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Christophe JAILLET authored
In [1], Christoph Hellwig has proposed to remove the wrappers in include/linux/pci-dma-compat.h. Some reasons why this API should be removed have been given by Julia Lawall in [2]. A coccinelle script has been used to perform the needed transformation. Only the relevant part is given below.
@@
expression e1, e2;
@@
-    pci_dma_mapping_error(e1, e2)
+    dma_mapping_error(&e1->dev, e2)
[1]: https://lore.kernel.org/kernel-janitors/20200421081257.GA131897@infradead.org/ [2]: https://lore.kernel.org/kernel-janitors/alpine.DEB.2.22.394.2007120902170.2424@hadrien/ Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
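As a hedged illustration of the rule (the variable names below are hypothetical, not taken from any particular driver), the transformation turns an error check on a DMA handle from the legacy PCI wrapper into the generic DMA API call:

  /* Illustrative only: 'pdev' is a struct pci_dev * and 'dma_addr' is a
   * dma_addr_t returned by an earlier mapping call. */

  /* Before: legacy wrapper from include/linux/pci-dma-compat.h */
  if (pci_dma_mapping_error(pdev, dma_addr))
          return -ENOMEM;

  /* After: generic DMA API, exactly what the coccinelle rule produces */
  if (dma_mapping_error(&pdev->dev, dma_addr))
          return -ENOMEM;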
-
Christophe JAILLET authored
Use dma_set_mask_and_coherent() instead of unrolling it with some dma_set_mask()+dma_set_coherent_mask(). Moreover, as stated in [1], dma_set_mask() with a 64-bit mask will never fail if dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also fail for the same reason. Simplify the code and remove some dead code accordingly. Now that qed_set_coherency_mask() is mostly a single call to dma_set_mask_and_coherent(), fold it into its only caller. [1]: https://lkml.org/lkml/2021/6/7/398 Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
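A hedged sketch of the probe-time pattern being removed in this and the following similar patches (generic code, not any driver's exact lines):

  /* Before: open-coded mask setup with a 32-bit fallback */
  rc = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
  if (rc) {
          rc = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
          if (rc)
                  return rc;
  }
  rc = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
  if (rc)
          return rc;

  /* After: a single call; the 32-bit fallback is dead code because a
   * 64-bit dma_set_mask() cannot fail when dev->dma_mask is non-NULL. */
  rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
  if (rc)
          return rc;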
-
Christophe JAILLET authored
Use dma_set_mask_and_coherent() instead of unrolling it with some dma_set_mask()+dma_set_coherent_mask(). Moreover, as stated in [1], dma_set_mask() with a 64-bit mask will never fail if dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also fail for the same reason. That said, 'pci_using_dac' can only be 1 after a successful dma_set_mask_and_coherent(). Simplify the code and remove some dead code accordingly. [1]: https://lkml.org/lkml/2021/6/7/398 Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Christophe JAILLET authored
Use dma_set_mask_and_coherent() instead of unrolling it with some dma_set_mask()+dma_set_coherent_mask(). Moreover, as stated in [1], dma_set_mask() with a 64-bit mask will never fail if dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also fail for the same reason. That said, 'pci_using_dac' can only be 1 after a successful dma_set_mask_and_coherent(). Simplify the code and remove some dead code accordingly. [1]: https://lkml.org/lkml/2021/6/7/398 Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Dust Li authored
Add comments for both smc_link_sendable() and smc_link_usable() to help better distinguish and use them. No functional changes. Signed-off-by: Dust Li <dust.li@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Christophe JAILLET authored
Use dma_set_mask_and_coherent() instead of unrolling it with some dma_set_mask()+dma_set_coherent_mask(). Moreover, as stated in [1], dma_set_mask_and_coherent() with a 64-bit mask will never fail if dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also fail for the same reason. That said, 'pci_using_dac' can only be 1 after a successful dma_set_mask_and_coherent(). Simplify the code and remove some dead code accordingly. [1]: https://lkml.org/lkml/2021/6/7/398 Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Christophe JAILLET authored
Use dma_set_mask_and_coherent() instead of unrolling it with some dma_set_mask()+dma_set_coherent_mask(). This simplifies the code and removes some dead code (dma_set_coherent_mask() cannot fail after a successful dma_set_mask()). Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hamish MacDonald authored
Removed spaces and added a tab to fix a checkpatch error. Signed-off-by: Hamish MacDonald <elusivenode@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Justin Iurman authored
v3: - Report 'backlog' (bytes) instead of 'qlen' (number of packets). v2: - Fix sparse warning (use rcu_dereference). This patch adds support for the queue depth in IOAM trace data fields. The draft [1] says the following: The "queue depth" field is a 4-octet unsigned integer field. This field indicates the current length of the egress interface queue of the interface from where the packet is forwarded out. The queue depth is expressed as the current amount of memory buffers used by the queue (a packet could consume one or more memory buffers, depending on its size). An existing function (i.e., qdisc_qstats_qlen_backlog) is used to retrieve the current queue length without reinventing the wheel. Note: this was tested, and the qlen increases when an artificial delay is added on the egress with tc. [1] https://datatracker.ietf.org/doc/html/draft-ietf-ippm-ioam-data#section-5.4.2.7 Signed-off-by: Justin Iurman <justin.iurman@uliege.be> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
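A rough, hedged sketch of how the egress queue depth can be read with qdisc_qstats_qlen_backlog() (the helper below and the single-queue qdisc lookup are simplifying assumptions, not the patch's exact code):

  #include <linux/netdevice.h>
  #include <net/dst.h>
  #include <net/sch_generic.h>

  /* Hypothetical helper: return the egress backlog in bytes for the
   * interface the packet is about to leave from. */
  static u32 ioam_egress_backlog(struct sk_buff *skb)
  {
          struct net_device *dev = skb_dst(skb)->dev;
          struct Qdisc *qdisc;
          __u32 qlen, backlog;

          rcu_read_lock();
          /* simplified: only looks at tx queue 0 */
          qdisc = rcu_dereference(netdev_get_tx_queue(dev, 0)->qdisc);
          qdisc_qstats_qlen_backlog(qdisc, &qlen, &backlog);
          rcu_read_unlock();

          return backlog; /* reported in bytes, per the v3 note above */
  }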
-
Colin Ian King authored
The pointer link is being re-assigned the same value that it was initialized with in the previous declaration statement. The re-assignment is redundant and can be removed. Fixes: 387707fd ("net/smc: convert static link ID to dynamic references") Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Reviewed-by: Tony Lu <tonylu@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tony Lu authored
This implements a TCP ULP for SMC, which helps applications replace TCP with the SMC protocol in place, and we use it to implement transparent replacement. It replaces the original TCP socket with SMC and reuses TCP as the clcsock when setsockopt() is called with the TCP_ULP option, without any extra overhead. There are two approaches to replace TCP sockets with SMC: - use the setsockopt() syscall with the TCP_ULP option; on error, the application falls back to TCP (see the sketch after this entry); - use a BPF prog of type BPF_CGROUP_INET_SOCK_CREATE or others to replace sockets transparently. BPF hooks several points during socket creation, bind and others, so users can inject their BPF logic without modifying their applications and choose which connections should be replaced with SMC by calling setsockopt() in the BPF prog, based on rules such as TCP tuples, PID, cgroup, etc. BPF doesn't support calling setsockopt() with TCP_ULP yet; I will send those patches after this is accepted. Signed-off-by: Tony Lu <tonylu@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
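A hedged userspace sketch of the first approach (it assumes the ULP is registered under the name "smc", as this series describes; error handling is kept minimal and the function name is illustrative):

  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int connect_maybe_smc(const struct sockaddr_in *addr)
  {
          int fd = socket(AF_INET, SOCK_STREAM, 0);

          if (fd < 0)
                  return -1;

          /* Ask the kernel to replace this TCP socket with SMC; if the
           * ULP is unavailable the socket simply stays plain TCP. */
          (void)setsockopt(fd, IPPROTO_TCP, TCP_ULP, "smc", sizeof("smc"));

          if (connect(fd, (const struct sockaddr *)addr, sizeof(*addr))) {
                  close(fd);
                  return -1;
          }
          return fd;
  }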
-
David S. Miller authored
Tony Lu says: ==================== RDMA device net namespace support for SMC This patch set introduces net namespace support for linkgroups. Patch 1 is the main approach to implement net ns support; patches 2 - 4 are additional modifications that let us know the netns. I will also submit the changes to smc-tools on github later. Currently, smc doesn't support net namespace isolation. The ibdevs registered to smc are shared by all linkgroups and connections. When running applications in different net namespaces, such as a container environment, applications should only use the ibdevs that belong to the same net namespace. This adds a new field, net, to the smc linkgroup struct. During first contact, it checks whether an existing linkgroup has the same net namespace; if not, it creates a new linkgroup and initializes the net field with the first link's ibdev net namespace. When finding the rdma devices, it also checks the sk net device's and ibdev's net namespaces. After the net namespace is destroyed, the net device and ibdev move to the root net namespace; the linkgroups are no longer matched and wait for lgr free. If rdma net namespace exclusive mode is not enabled, it behaves as before. Steps to enable and test net namespaces:
1. enable RDMA device net namespace exclusive support
   rdma system set netns exclusive   # default is shared
2. create a new net namespace, move and initialize the devices
   ip netns add test1
   rdma dev set mlx5_1 netns test1
   ip link set dev eth2 netns test1
   ip netns exec test1 ip link set eth2 up
   ip netns exec test1 ip addr add ${HOST_IP}/26 dev eth2
3. setup server and client, connect N <-> M
   ip netns exec test1 smc_run sockperf server --tcp              # server
   ip netns exec test1 smc_run sockperf pp --tcp -i ${SERVER_IP}  # client
4. netns-isolated linkgroups (2 * 2 mesh), each with their own linkgroups
   - server
     LG-ID     LG-Role  LG-Type  VLAN  #Conns  PNET-ID
     00000100  SERV     SINGLE   0     0
     00000200  SERV     SINGLE   0     0
     00000300  SERV     SINGLE   0     0
     00000400  SERV     SINGLE   0     0
   - client
     LG-ID     LG-Role  LG-Type  VLAN  #Conns  PNET-ID
     00000100  CLNT     SINGLE   0     0
     00000200  CLNT     SINGLE   0     0
     00000300  CLNT     SINGLE   0     0
     00000400  CLNT     SINGLE   0     0
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tony Lu authored
This prints the net namespace ID, which helps us distinguish different net namespaces when using tracepoints. Signed-off-by: Tony Lu <tonylu@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tony Lu authored
This adds the net namespace ID to the kernel log; net_cookie is unique across the whole system, which is useful in container environments. Signed-off-by: Tony Lu <tonylu@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tony Lu authored
This adds the net namespace ID to the linkgroup diag output, which helps us distinguish different namespaces; net_cookie is unique across the whole system. Signed-off-by: Tony Lu <tonylu@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tony Lu authored
Currently, the rdma device supports exclusive net namespace isolation, but linkgroups are not aware of the ibdev net namespace. Applications in containers don't want to share the nics when rdma exclusive mode is enabled: every net namespace should have its own linkgroups. This patch introduces a new field, net, for the linkgroup, which stands for the ibdev net namespace of the linkgroup. The net of a linkgroup is initialized with the net namespace of the link's ibdev. The net of the linkgroup is compared with that of the sock or ibdev before choosing it; if none matches, a new linkgroup is created in the current net namespace. If rdma net namespace exclusive mode is not enabled, it behaves as before. Signed-off-by: Tony Lu <tonylu@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
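As a hedged illustration of the matching rule (the helper and parameter names below are hypothetical, not the patch's), the reuse decision boils down to a namespace comparison:

  #include <net/net_namespace.h>

  /* Hypothetical helper: a candidate linkgroup may only be reused when
   * its new 'net' field matches the namespace of the connecting sock /
   * ibdev; otherwise a fresh linkgroup is created in the current netns. */
  static bool lgr_netns_matches(const struct net *lgr_net,
                                const struct net *sk_net)
  {
          return net_eq(lgr_net, sk_net);
  }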
-
- 31 Dec, 2021 23 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next David S. Miller authored
Alexei Starovoitov says: ==================== pull-request: bpf-next 2021-12-30 The following pull-request contains BPF updates for your *net-next* tree. We've added 72 non-merge commits during the last 20 day(s) which contain a total of 223 files changed, 3510 insertions(+), 1591 deletions(-). The main changes are:
1) Automatic setrlimit in libbpf when bpf is memcg-accounted in the kernel, from Andrii.
2) Beautify and de-verbose verifier logs, from Christy.
3) Composable verifier types, from Hao.
4) bpf_strncmp helper, from Hou (a usage sketch follows this entry).
5) bpf.h header dependency cleanup, from Jakub.
6) get_func_[arg|ret|arg_cnt] helpers, from Jiri.
7) Sleepable local storage, from KP.
8) Extend kfunc with PTR_TO_CTX, PTR_TO_MEM argument support, from Kumar.
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
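For instance, the new bpf_strncmp() helper (item 4 above) lets a program compare a local buffer against a constant string without open-coding a loop; a minimal hedged sketch (the attach point and strings are illustrative, and it assumes a libbpf/kernel new enough to expose the helper):

  // SPDX-License-Identifier: GPL-2.0
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("tracepoint/syscalls/sys_enter_execve")
  int trace_execve(void *ctx)
  {
          char comm[16] = {};

          bpf_get_current_comm(comm, sizeof(comm));
          /* bpf_strncmp(s1, s1_sz, s2): s2 must be a constant string */
          if (bpf_strncmp(comm, sizeof(comm), "sshd") == 0)
                  bpf_printk("execve issued by sshd");
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";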
-
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux David S. Miller authored
Saeed Mahameed says: ==================== mlx5 Software steering, New features and optimizations This patch series brings various SW steering features, optimizations and debug-ability focused improvements.
1) Expose debugfs for dumping the SW steering resources
2) Remove unused fields
3) Support matching on new fields
4) Steering optimization for RX/TX-only rules
5) Make Software steering the default steering mechanism when available; applies only to Switchdev mode FDB
From Yevgeny Kliteynik and Muhammad Sammar:
- Patch 1 fixes an error flow in creating matchers
- Patch 2 fixes the lower-case macro prefix "mlx5_" to "MLX5_"
- Patch 3 removes an unused struct member in mlx5dr_matcher
- Patch 4 renames the list field in the matcher struct to list_node to reflect the fact that this field is a list node that is stored on another struct's lists
- Patch 5 adds checking for a valid Flex parser ID value
- Patch 6 adds the missing reserved fields to dr_match_param and aligns it to the format that is defined by the HW spec
- Patch 7 adds support for dumping SW steering (SMFS) resources using debugfs in CSV format: domain and its tables, matchers and rules
- Patch 8 adds support for a new destination type - UPLINK
- Patch 9 adds WARN_ON_ONCE on refcount checks in SW steering object destructors
- Patches 10, 11, 12 add misc5 flow table match parameters and add support for matching on tunnel headers 0 and 1
- Patch 13 adds support for matching on the geneve_tlv_option_0_exist field
- Patch 14 implements a performance optimization for empty or RX/TX-only matchers by splitting RX and TX matchers handling: matcher connection in the matchers chain is split into two separate lists (RX only and TX only), which solves a usecase of many RX or TX only rules that create a long chain of RX/TX-only paths w/o the actual rules
- Patch 15 ignores modify TTL if the device doesn't support it, instead of adding an unsupported action
- Patch 16 sets SMFS as the default steering mode
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Guangbin Huang says: ==================== net: hns3: refactor cmdq functions in PF/VF Currently, the hns3 PF and VF modules have two sets of cmdq APIs to provide cmdq message interaction functions. Most of these APIs are the same; the only differences are the function variables and names with pf and vf suffixes. These two sets of cmdq APIs are redundant and add extra bug-fix work. This series refactors the cmdq APIs in the hns3 PF and VF by implementing one set of common cmdq APIs for PF and VF to reuse and deleting the old APIs. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jie Wang authored
Currently most cmdq APIs are unified in hclge_comm_cmd.c, and newly developed cmdq APIs should also be placed in hclge_comm_cmd.c, so there is no need to keep hclge_cmd.c and hclgevf_cmd.c. This patch moves hclge(vf)_cmd_send to hclge(vf)_main.c and deletes those source files and their makefile scripts. Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jie Wang authored
This patch uses the common cmdq init and uninit APIs to replace the old APIs in the VF cmdq module init and uninit paths. The old VF init and uninit APIs are then deleted. Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jie Wang authored
This patch uses the common cmdq init and uninit APIs to replace the old APIs in the PF cmdq module init and uninit paths. The old PF init and uninit APIs are then deleted. Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jie Wang authored
The PF and VF cmdq init and uninit APIs are also almost the same except for the suffixes of the API names. This patch creates the common cmdq init and uninit APIs needed by the PF and VF cmdq modules. The next patch will use the new unified APIs to replace the init and uninit APIs in the PF module. Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jie Wang authored
This patch uses the common cmdq resource allocate/free/query APIs to replace the old APIs in the VF cmdq module and deletes the old cmdq resource APIs. The hclgevf_cmd_setup_basic_desc name is still kept as a seam API to avoid too many meaningless replacements. Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jie Wang authored
This patch uses the common cmdq resource allocate/free/query APIs to replace the old APIs in the PF cmdq module and deletes the old cmdq resource APIs. The hclge_cmd_setup_basic_desc name is still kept as a seam API to avoid too many meaningless replacements. Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jie Wang authored
The PF and VF cmdq module resource allocate/free/query APIs are almost the same except for the suffixes of the API names. These duplicate implementations double the development and bug-fix work. This patch creates common cmdq resource allocate/free/query APIs to be called by the PF and VF cmdq init/uninit APIs. The next patch will use the new unified APIs to replace the old init/uninit APIs. Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jie Wang authored
This patch first uses the new hardware description struct hclge_comm_hw as a child member of hclgevf_hw and deletes the old hardware description child members. All the hclgevf_hw variables used in the VF module are modified according to the new hclgevf_hw. Secondly, hclgevf_cmd_send is refactored to use the hclge_comm_cmd_send APIs. The old functions called by hclgevf_cmd_send are all deleted; hclgevf_cmd_send itself is kept to avoid too many meaningless modifications. Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jie Wang authored
This patch first uses the new hardware description struct hclge_comm_hw as a child member of hclge_hw and deletes the original child members of hclge_hw. All the hclge_hw variables used in the PF module are modified according to the new hclge_hw. Secondly, hclge_cmd_send is refactored to use the hclge_comm_cmd_send APIs. The old functions called by hclge_cmd_send are deleted, and hclge_cmd_send is kept to avoid too many meaningless modifications. Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jie Wang authored
This patch creates a new set of unified hclge_comm_cmd_send APIs for the PF and VF cmdq modules. The subfunctions called by hclge_comm_cmd_send are also created, including cmdq result check, cmdq return code conversion and ring space operation APIs. These new common cmdq APIs will be used to replace the old PF and VF cmdq APIs in the next patches. Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jie Wang authored
This patch uses the new common struct hclge_desc to replace struct hclgevf_desc in the VF cmdq module and then deletes the old struct hclgevf_desc. Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jie Wang authored
Currently the PF and VF cmdq APIs use struct hclge(vf)_hw to describe the cmdq hardware information needed by hclge(vf)_cmd_send. There are small differences between their child structs hclge_cmq_ring and hclgevf_cmq_ring, and it is redundant to use two sets of structures to support the same functions. So this patch creates a new set of common cmdq hardware description structures (hclge_comm_hw) to unify the PF and VF cmdq functions; a sketch of the idea follows this entry. The struct hclge_desc is still kept to avoid too many meaningless replacements. These new structures will be used to unify the hclge(vf)_hw structures in the PF and VF cmdq APIs in the next patches. Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
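An illustrative sketch of the unification idea (the field layout below is a guess for illustration only, not the driver's actual hclge_comm_hw definition):

  #include <linux/io.h>
  #include <linux/types.h>

  /* Hypothetical layout: one shared hardware description that both the
   * PF and VF embed, so the cmdq code no longer needs two copies. */
  struct hclge_comm_cmq_ring {
          dma_addr_t desc_dma_addr;       /* DMA address of the ring */
          struct hclge_desc *desc;        /* hclge_desc format kept as-is */
          int next_to_use;
          int next_to_clean;
  };

  struct hclge_comm_hw {
          void __iomem *io_base;          /* register base */
          struct hclge_comm_cmq_ring csq; /* command send queue */
          struct hclge_comm_cmq_ring crq; /* command receive queue */
  };

  /* PF and VF then embed the common struct instead of private copies: */
  struct hclge_hw   { struct hclge_comm_hw hw; /* plus PF-only members */ };
  struct hclgevf_hw { struct hclge_comm_hw hw; /* plus VF-only members */ };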
-
Jie Wang authored
Currently we plan to refactor the PF and VF cmdq modules. A new folder, hns3_common, will be created to store the new common APIs used by the PF and VF cmdq modules, so the PF and VF compilation processes will both depend on hns3_common. This may cause parallel-building problems if we add a new makefile building unit. So this patch combines the PF and VF makefile scripts into the top-level makefile to support the new hns3_common folder, which will be created in the next patch. Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yevgeny Kliteynik authored
Set SMFS (SW-managed flow steering) as the default steering mode instead of DMFS (device-managed flow steering). In SMFS, the driver writes the STEs (Steering Table Entries) directly to the device's ICM, which allows for a higher rule insertion rate than using the FW command interface, as is done in DMFS. The SMFS/DMFS steering mode can be configured through the devlink param 'flow_steering_mode'; the possible values are 'smfs' or 'dmfs'. The desired 'flow_steering_mode' param value should be set before enabling switchdev mode. Example:
# devlink dev param set pci/0000:05:00.0 name flow_steering_mode smfs
# devlink dev eswitch set pci/0000:05:00.0 mode switchdev
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
-
Yevgeny Kliteynik authored
When modifying the TTL, the packet's csum has to be recalculated. Due to a HW issue in ConnectX-5, csum recalculation for modify TTL is supported through a work-around that is specifically enabled by configuration. If the work-around isn't enabled, ignore the modify TTL action rather than adding an unsupported action. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
-
Yevgeny Kliteynik authored
Every matcher has RX and TX paths. When a new matcher is created, its RX and TX start/end anchors are connected to the respective RX and TX anchors of the previous and next matchers. This creates a potential performance issue: when a certain rule is added to a matcher, in many cases it is an RX- or TX-only rule, which may create a long chain of RX/TX-only paths w/o the actual rules. This patch aims to handle this issue. RX and TX matchers are now handled separately: matcher connection in the matchers chain is split into two separate lists (RX only and TX only). When a new matcher is created, it is initially created 'detached' - its RX/TX members are not inserted into the table's matcher list. When an actual rule is added, only its appropriate RX or TX nic matchers are then added to the table's nic matchers list and inserted into their place in the chain of matchers. I.e., if the rule that is being added is an RX-only rule, only the RX part of the matcher will be connected to the chain, while the TX part of the matcher remains detached and doesn't prolong the TX chain of the matchers. The same goes for rule deletion: when the last RX/TX rule of the nic matcher is destroyed, the nic matcher is removed from its list. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
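A heavily simplified sketch of the "detached until the first rule" behaviour described above (the structure and function names are hypothetical, not mlx5dr's):

  #include <linux/list.h>

  /* Hypothetical model: the table keeps separate RX-only and TX-only
   * matcher chains; a nic matcher joins its chain only when its first
   * rule is added, and detaches again when its last rule is removed. */
  struct nic_matcher {
          struct list_head list_node;   /* position in the RX or TX chain */
          unsigned int num_rules;
  };

  static void nic_matcher_add_rule(struct nic_matcher *m,
                                   struct list_head *chain)
  {
          if (m->num_rules++ == 0)      /* first rule: connect to chain */
                  list_add_tail(&m->list_node, chain);
  }

  static void nic_matcher_del_rule(struct nic_matcher *m)
  {
          if (--m->num_rules == 0)      /* last rule: detach */
                  list_del_init(&m->list_node);
  }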
-
Yevgeny Kliteynik authored
Match on geneve_tlv_option_0_exist field on devices that support STEv1. Signed-off-by: Muhammad Sammar <muhammads@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
-
Muhammad Sammar authored
Tunnel headers are generic encapsulation headers that apply to all tunneling protocols identified by the device's native parser or by the programmable parser; this support enables raw matching on tunnel headers 0 and 1. Signed-off-by: Muhammad Sammar <muhammads@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
-
Muhammad Sammar authored
Add misc5 match params to enable matching tunnel headers. Signed-off-by: Muhammad Sammar <muhammads@nvidia.com>
-
Muhammad Sammar authored
Add support for the misc5 match parameter as per the HW spec; this will allow matching on tunnel_header fields. Signed-off-by: Muhammad Sammar <muhammads@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
-