- 18 Jul, 2020 3 commits
-
-
Zhang Changzhong authored
Gcc reports the following warnings:

    drivers/net/ethernet/agere/et131x.c:953:6: warning: variable 'pm_csr' set but not used [-Wunused-but-set-variable]
      953 |  u32 pm_csr;
          |      ^~~~~~
    drivers/net/ethernet/agere/et131x.c:1002:6: warning: variable 'pm_csr' set but not used [-Wunused-but-set-variable]
     1002 |  u32 pm_csr;
          |      ^~~~~~
    drivers/net/ethernet/agere/et131x.c:3446:8: warning: variable 'pm_csr' set but not used [-Wunused-but-set-variable]
     3446 |  u32 pm_csr;
          |      ^~~~~~

After commit 38df6492 ("et131x: Add PCIe gigabit ethernet driver et131x to drivers/net"), 'pm_csr' is never used in these functions, so remove it to avoid the build warnings. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Zhang Changzhong <zhangchangzhong@huawei.com> Acked-by: Mark Einon <mark.einon@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
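For readers unfamiliar with this class of warning, a minimal sketch of what -Wunused-but-set-variable flags and what the cleanup typically looks like (hypothetical code, not taken from et131x):

    #include <linux/io.h>
    #include <linux/types.h>

    /* Before: the variable is assigned but its value is never consumed. */
    static void example_handle_power(void __iomem *regs)
    {
            u32 pm_csr;

            pm_csr = readl(regs + 0x28);    /* value read, then never used */
            /* ... rest of the function ignores pm_csr ... */
    }

    /* After: drop the variable; keep the register read only if it has a
     * needed side effect, otherwise delete the read as well.
     */
    static void example_handle_power_fixed(void __iomem *regs)
    {
            readl(regs + 0x28);
            /* ... */
    }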
-
Zhang Changzhong authored
Gcc reports the following warning:

    drivers/net/ethernet/brocade/bna/bfa_ioc.c:1538:6: warning: variable 't' set but not used [-Wunused-but-set-variable]
     1538 |  u32 t;
          |      ^

After commit c107ba17 ("bna: Firmware Patch Simplification"), 't' is never used, so remove it to avoid the build warning. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Zhang Changzhong <zhangchangzhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
The fact that NETIF_F_HW_TC is not set should be a sufficient indication to the user that TC offloads are not supported. No need to bother users of older firmware versions with pointless warnings on every boot. Also, since the support is optional, bnxt_init_tc() should not return an error in case FW is old, similarly to how error is not returned when CONFIG_BNXT_FLOWER_OFFLOAD is not set. With that we can add an error message to the caller, to warn about actual unexpected failures. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 17 Jul, 2020 19 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
David S. Miller authored
Saeed Mahameed says:

====================
mlx5-updates-2020-07-16

Fixes:
1) Fix build break when CONFIG_XPS is not set
2) Fix missing switch_id for representors

Updates:
1) IPsec XFRM RX offloads from Raed and Huy.
   - Added IPsec RX steering flow tables to NIC RX
   - Refactoring of the existing FPGA IPsec, to add support for ConnectX IPsec.
   - RX data path handling for IPsec traffic
   - Synchronize offloading device ESN with xfrm received SN
2) Parav allows E-Switch to switch to switchdev mode directly, without the need to go through legacy mode first.
3) From Tariq, misc updates including:
   3.1) Indirect calls for RX and XDP handlers
   3.2) Make MLX5_EN_TLS non-prompt, as it should always be enabled when TLS and MLX5_EN are selected.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Christophe JAILLET authored
Avoid a memset after a call to 'dma_alloc_coherent()'; it has been unnecessary since commit 518a2f19 ("dma-mapping: zero memory returned from dma_alloc_*"). Also replace a kmalloc+memset pair with the corresponding kzalloc. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
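As a hedged illustration (generic helpers with made-up names, not the patched driver's code), the two cleanups look roughly like this:

    #include <linux/dma-mapping.h>
    #include <linux/slab.h>

    static void *example_alloc_ring(struct device *dev, size_t size, dma_addr_t *dma)
    {
            /* dma_alloc_coherent() already returns zeroed memory (since commit
             * 518a2f19), so no memset() of the buffer is needed here.
             */
            return dma_alloc_coherent(dev, size, dma, GFP_KERNEL);
    }

    static void *example_alloc_table(size_t len)
    {
            /* kmalloc() followed by memset(.., 0, ..) collapses into kzalloc(). */
            return kzalloc(len, GFP_KERNEL);
    }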
-
Christophe JAILLET authored
The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below and has been hand modified to replace GFP_ with a correct flag. It has been compile tested.

When memory is allocated in 'ace_allocate_descriptors()' and 'ace_init()' GFP_KERNEL can be used because both functions are called from the probe function and no lock is acquired.

@@
@@
- PCI_DMA_BIDIRECTIONAL
+ DMA_BIDIRECTIONAL

@@
@@
- PCI_DMA_TODEVICE
+ DMA_TO_DEVICE

@@
@@
- PCI_DMA_FROMDEVICE
+ DMA_FROM_DEVICE

@@
@@
- PCI_DMA_NONE
+ DMA_NONE

@@
expression e1, e2, e3;
@@
- pci_alloc_consistent(e1, e2, e3)
+ dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3;
@@
- pci_zalloc_consistent(e1, e2, e3)
+ dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3, e4;
@@
- pci_free_consistent(e1, e2, e3, e4)
+ dma_free_coherent(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_map_single(e1, e2, e3, e4)
+ dma_map_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_unmap_single(e1, e2, e3, e4)
+ dma_unmap_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4, e5;
@@
- pci_map_page(e1, e2, e3, e4, e5)
+ dma_map_page(&e1->dev, e2, e3, e4, e5)

@@
expression e1, e2, e3, e4;
@@
- pci_unmap_page(e1, e2, e3, e4)
+ dma_unmap_page(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_map_sg(e1, e2, e3, e4)
+ dma_map_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_unmap_sg(e1, e2, e3, e4)
+ dma_unmap_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
+ dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_dma_sync_single_for_device(e1, e2, e3, e4)
+ dma_sync_single_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
+ dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_dma_sync_sg_for_device(e1, e2, e3, e4)
+ dma_sync_sg_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2;
@@
- pci_dma_mapping_error(e1, e2)
+ dma_mapping_error(&e1->dev, e2)

@@
expression e1, e2;
@@
- pci_set_dma_mask(e1, e2)
+ dma_set_mask(&e1->dev, e2)

@@
expression e1, e2;
@@
- pci_set_consistent_dma_mask(e1, e2)
+ dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
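To make the effect of the conversion concrete, here is a hedged before/after sketch (generic driver code with an illustrative helper name, not the actual acenic changes):

    #include <linux/dma-mapping.h>
    #include <linux/pci.h>

    /* Before: legacy pci-dma-compat wrapper, with no explicit gfp flag:
     *
     *   ring = pci_alloc_consistent(pdev, size, &ring_dma);
     *
     * After: generic DMA API; GFP_KERNEL is safe here because the call
     * happens in probe context, where sleeping is allowed.
     */
    static void *example_alloc_ring(struct pci_dev *pdev, size_t size,
                                    dma_addr_t *ring_dma)
    {
            return dma_alloc_coherent(&pdev->dev, size, ring_dma, GFP_KERNEL);
    }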
-
Christophe JAILLET authored
The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below and has been hand modified to replace GFP_ with a correct flag. It has been compile tested.

When memory is allocated in 'gem_init_one()', GFP_KERNEL can be used because it is a probe function and no lock is acquired.

@@
@@
- PCI_DMA_BIDIRECTIONAL
+ DMA_BIDIRECTIONAL

@@
@@
- PCI_DMA_TODEVICE
+ DMA_TO_DEVICE

@@
@@
- PCI_DMA_FROMDEVICE
+ DMA_FROM_DEVICE

@@
@@
- PCI_DMA_NONE
+ DMA_NONE

@@
expression e1, e2, e3;
@@
- pci_alloc_consistent(e1, e2, e3)
+ dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3;
@@
- pci_zalloc_consistent(e1, e2, e3)
+ dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3, e4;
@@
- pci_free_consistent(e1, e2, e3, e4)
+ dma_free_coherent(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_map_single(e1, e2, e3, e4)
+ dma_map_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_unmap_single(e1, e2, e3, e4)
+ dma_unmap_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4, e5;
@@
- pci_map_page(e1, e2, e3, e4, e5)
+ dma_map_page(&e1->dev, e2, e3, e4, e5)

@@
expression e1, e2, e3, e4;
@@
- pci_unmap_page(e1, e2, e3, e4)
+ dma_unmap_page(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_map_sg(e1, e2, e3, e4)
+ dma_map_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_unmap_sg(e1, e2, e3, e4)
+ dma_unmap_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
+ dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_dma_sync_single_for_device(e1, e2, e3, e4)
+ dma_sync_single_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
+ dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_dma_sync_sg_for_device(e1, e2, e3, e4)
+ dma_sync_sg_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2;
@@
- pci_dma_mapping_error(e1, e2)
+ dma_mapping_error(&e1->dev, e2)

@@
expression e1, e2;
@@
- pci_set_dma_mask(e1, e2)
+ dma_set_mask(&e1->dev, e2)

@@
expression e1, e2;
@@
- pci_set_consistent_dma_mask(e1, e2)
+ dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Suraj Upadhyay authored
Replace goto loop with while loop. Signed-off-by: Suraj Upadhyay <usuraj35@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
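For illustration only (generic code with made-up names, not the driver in question), this is the kind of restructuring such a cleanup performs:

    void example_process(int item);

    /* Before: iteration written with a label and goto. */
    static void example_process_all_goto(int *items, int n)
    {
            int i = 0;
    again:
            if (i < n) {
                    example_process(items[i]);
                    i++;
                    goto again;
            }
    }

    /* After: the same control flow expressed as an idiomatic while loop. */
    static void example_process_all_while(int *items, int n)
    {
            int i = 0;

            while (i < n) {
                    example_process(items[i]);
                    i++;
            }
    }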
-
David S. Miller authored
Priyaranjan Jha says:

====================
tcp: improve handling of DSACK covering multiple segments

Currently, while processing DSACK, we assume a DSACK covers only one segment. This leads to significant underestimation of the number of duplicate segments when LRO/GRO is in use. Also, the existing SNMP counters, TCPDSACKRecv and TCPDSACKOfoRecv, make the same assumption for DSACK, which makes them unusable for estimating spurious retransmit rates.

This patch series fixes the segment accounting for DSACK by estimating the number of duplicate segments as (DSACKed sequence range) / MSS. It also introduces a new SNMP counter, TCPDSACKRecvSegs, which tracks the estimated number of duplicate segments.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Priyaranjan Jha authored
There are two existing SNMP counters, TCPDSACKRecv and TCPDSACKOfoRecv, which are incremented depending on whether the DSACKed range is below the cumulative ACK sequence number or not. Unfortunately, these both implicitly assume each DSACK covers only one segment. This makes these counters unusable for estimating spurious retransmit rates, or real/non-spurious loss rate. This patch introduces a new SNMP counter, TCPDSACKRecvSegs, which tracks the estimated number of duplicate segments based on: (DSACKed sequence range) / MSS. This counter is usable for estimating spurious retransmit rates, or real/non-spurious loss rate. Signed-off-by: Priyaranjan Jha <priyarjha@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Priyaranjan Jha authored
Currently, while processing DSACK, we assume DSACK covers only one segment. This leads to significant underestimation of DSACKs with LRO/GRO. This patch fixes segment accounting with DSACK by estimating segment count from DSACK sequence range / MSS. Signed-off-by: Priyaranjan Jha <priyarjha@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Yousuk Seung <ysseung@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
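A minimal sketch of the estimation idea described above (variable and function names are illustrative, not the exact tcp_input.c code): take the DSACKed byte range, divide by the MSS, round up, and never report fewer than one duplicate segment.

    #include <linux/kernel.h>
    #include <linux/types.h>

    static u32 example_dsack_seg_count(u32 start_seq, u32 end_seq, u32 mss)
    {
            u32 dsacked_bytes = end_seq - start_seq;

            /* (DSACKed sequence range) / MSS, rounded up, at least 1. */
            return max_t(u32, DIV_ROUND_UP(dsacked_bytes, mss), 1);
    }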
-
Christophe JAILLET authored
The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below and has been hand modified to replace GFP_ with a correct flag. It has been compile tested.

When memory is allocated in 'cas_tx_tiny_alloc()', GFP_KERNEL can be used because, a few lines below in its only caller, 'cas_alloc_rxds()' is also called, and that function makes explicit use of GFP_KERNEL.

When memory is allocated in 'cas_init_one()', GFP_KERNEL can be used because it is a probe function and no lock is acquired.

@@
@@
- PCI_DMA_BIDIRECTIONAL
+ DMA_BIDIRECTIONAL

@@
@@
- PCI_DMA_TODEVICE
+ DMA_TO_DEVICE

@@
@@
- PCI_DMA_FROMDEVICE
+ DMA_FROM_DEVICE

@@
@@
- PCI_DMA_NONE
+ DMA_NONE

@@
expression e1, e2, e3;
@@
- pci_alloc_consistent(e1, e2, e3)
+ dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3;
@@
- pci_zalloc_consistent(e1, e2, e3)
+ dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3, e4;
@@
- pci_free_consistent(e1, e2, e3, e4)
+ dma_free_coherent(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_map_single(e1, e2, e3, e4)
+ dma_map_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_unmap_single(e1, e2, e3, e4)
+ dma_unmap_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4, e5;
@@
- pci_map_page(e1, e2, e3, e4, e5)
+ dma_map_page(&e1->dev, e2, e3, e4, e5)

@@
expression e1, e2, e3, e4;
@@
- pci_unmap_page(e1, e2, e3, e4)
+ dma_unmap_page(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_map_sg(e1, e2, e3, e4)
+ dma_map_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_unmap_sg(e1, e2, e3, e4)
+ dma_unmap_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
+ dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_dma_sync_single_for_device(e1, e2, e3, e4)
+ dma_sync_single_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
+ dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_dma_sync_sg_for_device(e1, e2, e3, e4)
+ dma_sync_sg_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2;
@@
- pci_dma_mapping_error(e1, e2)
+ dma_mapping_error(&e1->dev, e2)

@@
expression e1, e2;
@@
- pci_set_dma_mask(e1, e2)
+ dma_set_mask(&e1->dev, e2)

@@
expression e1, e2;
@@
- pci_set_consistent_dma_mask(e1, e2)
+ dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Davide Caratti authored
Since commit d47a7215 ("mptcp: fix race in subflow_data_ready()"), it is possible to observe a regression in MP_JOIN kselftests. For sockets in TCP_CLOSE state, it's not sufficient to just wake up the main socket: we also need to ensure that received data are made available to the reader. Silence the WARN_ON_ONCE() in these cases: this preserves the syzkaller fix and restores the kselftests when they are run as follows:

    # while true; do
    >     make KBUILD_OUTPUT=/tmp/kselftest TARGETS=net/mptcp kselftest
    > done

Reported-by: Florian Westphal <fw@strlen.de>
Fixes: d47a7215 ("mptcp: fix race in subflow_data_ready()")
Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/47
Signed-off-by: Davide Caratti <dcaratti@redhat.com> Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Bjørn Mork says: ==================== usbnet: multicast filter support for cdc ncm devices This revives a 2 year old patch set from Miguel Rodríguez Pérez, which appears to have been lost somewhere along the way. I've based it on the last version I found (v4), and added one patch which I believe must have been missing in the original. I kept Oliver's ack on one of the patches, since both the patch and the motivation still is the same. Hope this is OK.. Thanks to the anonymous user <wxcafe@wxcafe.net> for bringing up this problem in https://bugs.debian.org/965074 This is only build and load tested by me. I don't have any device where I can test the actual functionality. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Miguel Rodríguez Pérez authored
Set the set_rx_mode callback to usbnet_cdc_update_filter, provided by cdc_ether, which simply admits all multicast traffic if more than one multicast filter is configured. Signed-off-by: Miguel Rodríguez Pérez <miguel@det.uvigo.gal> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Miguel Rodríguez Pérez authored
The cdc_ncm driver overrides the net_device_ops structure used by usbnet to be able to hook into .ndo_change_mtu. However, the structure was missing the .ndo_set_rx_mode field, preventing the driver from hooking into usbnet's set_rx_mode. This patch adds the missing callback, pointing .ndo_set_rx_mode in net_device_ops to usbnet_set_rx_mode. Signed-off-by: Miguel Rodríguez Pérez <miguel@det.uvigo.gal> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
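A hedged sketch of what hooking the callback up looks like (the ops struct name and the MTU handler name are illustrative; usbnet_open/stop/start_xmit are existing usbnet helpers and usbnet_set_rx_mode is the helper exported earlier in this series):

    #include <linux/netdevice.h>
    #include <linux/usb/usbnet.h>

    /* cdc_ncm's own MTU hook; named illustratively here. */
    int example_ncm_change_mtu(struct net_device *net, int new_mtu);

    static const struct net_device_ops example_cdc_ncm_netdev_ops = {
            .ndo_open        = usbnet_open,
            .ndo_stop        = usbnet_stop,
            .ndo_start_xmit  = usbnet_start_xmit,
            .ndo_change_mtu  = example_ncm_change_mtu,
            .ndo_set_rx_mode = usbnet_set_rx_mode,  /* the previously missing hook */
    };

With this field in place, multicast filter updates made by the stack reach usbnet's handler instead of being silently dropped.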
-
Bjørn Mork authored
This function can be reused by other usbnet minidrivers. Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Miguel Rodríguez Pérez authored
This makes the function available to other drivers, like cdc_ncm. Signed-off-by: Miguel Rodríguez Pérez <miguel@det.uvigo.gal> Acked-by: Oliver Neukum <oneukum@suse.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Miguel Rodríguez Pérez authored
usbnet_cdc_update_filter was getting the interface number from the usb_interface struct in cdc_state->control. However, cdc_ncm does not initialize that structure in its bind function, but uses cdc_ncm_ctx instead. Getting intf directly from struct usbnet solves the problem. Signed-off-by: Miguel Rodríguez Pérez <miguel@det.uvigo.gal> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
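As a hedged sketch (the surrounding function is simplified and the helper name is made up; only the lookup change is shown), the fix amounts to reading the interface pointer that usbnet itself stores rather than the one in cdc_state:

    #include <linux/usb.h>
    #include <linux/usb/usbnet.h>

    static u8 example_filter_iface_num(struct usbnet *dev)
    {
            /* Old approach: relies on cdc_state, which cdc_ncm never fills in:
             *
             *   struct cdc_state *info = (void *)&dev->data;
             *   return info->control->cur_altsetting->desc.bInterfaceNumber;
             */

            /* New approach: struct usbnet always carries the bound interface. */
            return dev->intf->cur_altsetting->desc.bInterfaceNumber;
    }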
-
Eelco Chaudron authored
This patch reorders the masks array every 4 seconds based on the masks' usage count. This greatly reduces the number of masks hit per packet, and hence improves overall performance, especially in the OVS/OVN case for OpenShift.

Here are some results from the OVS/OVN OpenShift test, which uses 8 pods, each pod having 512 uperf connections, each connection sending a 64-byte request and getting a 1024-byte response (TCP). All uperf clients are on one worker node while all uperf servers are on the other worker node.

    Kernel without this patch:      7.71 Gbps
    Kernel with this patch applied: 14.52 Gbps

We also ran tests to verify that the rebalance activity does not lower the flow insertion rate; it does not. Signed-off-by: Eelco Chaudron <echaudro@redhat.com> Tested-by: Andrew Theurer <atheurer@redhat.com> Reviewed-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
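A minimal sketch of the reordering idea (data structures and names are illustrative, not the kernel's sw_flow_mask handling): periodically sort the mask array by how often each mask matched, so frequently hit masks are probed first.

    #include <linux/sort.h>
    #include <linux/types.h>

    struct example_mask {
            u64 hit_count;          /* lookups that matched this mask */
            /* ... the actual mask fields ... */
    };

    static int example_cmp_hits_desc(const void *a, const void *b)
    {
            const struct example_mask *ma = *(const struct example_mask * const *)a;
            const struct example_mask *mb = *(const struct example_mask * const *)b;

            if (ma->hit_count == mb->hit_count)
                    return 0;
            return ma->hit_count > mb->hit_count ? -1 : 1;
    }

    /* Run periodically (e.g. every 4 seconds): lookups probe the array in
     * order, so moving frequently hit masks to the front cuts the average
     * number of masks tried per packet.
     */
    static void example_rebalance_masks(struct example_mask **masks, size_t n)
    {
            sort(masks, n, sizeof(*masks), example_cmp_hits_desc, NULL);
    }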
-
Chris Healy authored
Some Cotsworks SFF modules have invalid data in the first few bytes of the module EEPROM, which results in them not being detected as valid modules. Address this by poking the correct values into the module EEPROM when the model/PN match and the existing EEPROM contents are incorrect. Signed-off-by: Chris Healy <cphealy@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
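A hedged sketch of the general approach (offsets, byte values and the vendor-name padding are illustrative assumptions; the real patch works through the SFP driver's EEPROM accessors and writes the corrected bytes back to the module):

    #include <linux/string.h>
    #include <linux/types.h>

    /* Illustrative only: if the vendor name field identifies the affected
     * part and the leading identifier bytes are wrong, patch them in the
     * in-memory ID copy; the caller would then write the fix back to EEPROM.
     */
    static bool example_fixup_sff_id(u8 *id)
    {
            static const u8 good_hdr[2] = { 0x03, 0x04 };   /* assumed values */

            if (memcmp(id + 20, "COTSWORKS       ", 16))    /* vendor name field */
                    return false;                           /* not affected */

            if (!memcmp(id, good_hdr, sizeof(good_hdr)))
                    return false;                           /* already correct */

            memcpy(id, good_hdr, sizeof(good_hdr));
            return true;                                     /* needs write-back */
    }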
-
Vladimir Oltean authored
At the time of introduction, in commit bdeced75 ("net: dsa: felix: Add PCS operations for PHYLINK"), support for the Lynx PCS inside Felix was relying, for USXGMII support, on the fact that get_phy_device() is able to parse the Lynx PCS "device-in-package" registers for this C45 MDIO device and identify it correctly. However, this was actually working somewhat by mistake (in the sense that, even though it was detected, it was detected for the wrong reasons).

The get_phy_c45_ids() function works by iterating through all MMDs starting from 1 (MDIO_MMD_PMAPMD) and stops at the first one which returns a non-zero value in the "device-in-package" register pair, proceeding to see what that non-zero value is. For the Felix PCS, the first MMD (1, for the PMA/PMD) returns a non-zero value of 0xffffffff in the "device-in-package" registers. There is a code branch which is supposed to treat this case and flag it as wrong, and normally, this would have caught my attention when adding initial support for this PCS:

	if ((devs_in_pkg & 0x1fffffff) == 0x1fffffff) {
		/* If mostly Fs, there is no device there, then let's probe
		 * MMD 0, as some 10G PHYs have zero Devices In package,
		 * e.g. Cortina CS4315/CS4340 PHY.
		 */

However, this code never actually kicked in, it seems, because this snippet from get_phy_c45_devs_in_pkg() was basically sabotaging itself, by returning 0xfffffffe instead of 0xffffffff:

	/* Bit 0 doesn't represent a device, it indicates c22 regs presence */
	*devices_in_package &= ~BIT(0);

Then the rest of the code just carried on thinking "ok, MMD 1 (PMA/PMD) says that there are 31 devices in that package, each having a device id of ffff:ffff, that's perfectly fine, let's go ahead and probe this PHY device".

But after cleanup commit 320ed3bf ("net: phy: split devices_in_package"), this got "fixed", and now devs_in_pkg is no longer 0xfffffffe, but 0xffffffff. So now, get_phy_device is returning -ENODEV for the Lynx PCS, because the semantics have remained mostly unchanged: the loop stops at the first MMD that returns a non-zero value, and that is MMD 1.

But the Lynx PCS is simply a clause 37 PCS which implements the required MAC-side functionality for USXGMII (when operated in C45 mode, which is where C45 devices-in-package detection is relevant to). Of course it will fail the PMD/PMA test (MMD 1), since it is not a PHY. But it does implement detection for MDIO_MMD_PCS (3):

 - MDIO_DEVS1=0x008a, MDIO_DEVS2=0x0000,
 - MDIO_DEVID1=0x0083, MDIO_DEVID2=0xe400

Let get_phy_c45_ids() continue searching for valid MMDs, and don't assume that every phy_device has a PMA/PMD MMD implemented.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 16 Jul, 2020 18 commits
-
-
Jakub Kicinski authored
Petr Machata says:

====================
net: sched: Do not drop root lock in tcf_qevent_handle()

Mirred currently does not mix well with blocks executed after the qdisc root lock is taken. This includes classification blocks (such as in PRIO, ETS, DRR qdiscs) and qevents. The locking caused by the packet mirrored by mirred can cause deadlocks: either when the thread of execution attempts to take the lock a second time, or when two threads end up waiting on each other's locks.

The qevent patchset attempted to not introduce further badness of this sort, and dropped the lock before executing the qevent block. However, this led to too little locking and races between qdisc configuration and packet enqueue in the RED qdisc.

Before the deadlock issues are solved in a way that can be applied across many qdiscs reasonably easily, do for qevents what is done for the classification blocks and just keep holding the root lock. That is done in patch #1. Patch #2 then drops the now unnecessary root_lock argument from Qdisc_ops.enqueue.
====================

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Petr Machata authored
This reverts commit aebe4426. Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Petr Machata authored
Mirred currently does not mix well with blocks executed after the qdisc root lock is taken. This includes classification blocks (such as in PRIO, ETS, DRR qdiscs) and qevents. The locking caused by the packet mirrored by mirred can cause deadlocks: either when the thread of execution attempts to take the lock a second time, or when two threads end up waiting on each other's locks.

The qevent patchset attempted to not introduce further badness of this sort, and dropped the lock before executing the qevent block. However, this led to too little locking and races between qdisc configuration and packet enqueue in the RED qdisc.

Before the deadlock issues are solved in a way that can be applied across many qdiscs reasonably easily, do for qevents what is done for the classification blocks and just keep holding the root lock.

Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Eli Britstein authored
The 128-bit ct_label field is matched using a 32-bit hardware register, so only the lower 32 bits of the ct_label field are offloaded. Change this logic to support setting and matching the higher bits too: map the 128-bit data to a unique 32-bit ID, and match on an exact match of the mapping ID derived from key & mask. Signed-off-by: Eli Britstein <elibr@mellanox.com> Reviewed-by: Oz Shlomo <ozsh@mellanox.com> Reviewed-by: Roi Dayan <roid@mellanox.com> Reviewed-by: Maor Dickman <maord@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
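A hedged sketch of the mapping idea (using a generic xarray-based allocator; the real driver has its own mapping infrastructure, struct names here are made up, and deduplication of identical labels is omitted):

    #include <linux/slab.h>
    #include <linux/xarray.h>

    struct example_label_map {
            struct xarray ids;      /* id -> copy of the 128-bit label;
                                     * init with xa_init_flags(&ids, XA_FLAGS_ALLOC)
                                     */
    };

    /* Allocate a unique 32-bit ID for a 128-bit ct_label value; hardware then
     * matches on the small ID instead of the full label.
     */
    static int example_label_to_id(struct example_label_map *map,
                                   const u32 label[4], u32 *id)
    {
            u32 *copy = kmemdup(label, 4 * sizeof(u32), GFP_KERNEL);

            if (!copy)
                    return -ENOMEM;

            return xa_alloc(&map->ids, id, copy,
                            XA_LIMIT(1, U32_MAX), GFP_KERNEL);
    }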
-
Tariq Toukan authored
UMR WQEs are posted in bulks, and HW is notified once per bulk. Reduce the number of completions by requesting a completion only for the last WQE of the bulk. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Reviewed-by: Maxim Mikityanskiy <maximmi@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Tariq Toukan authored
Use INDIRECT_CALL_2() helper to avoid the cost of the indirect call when/if CONFIG_RETPOLINE=y. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Reviewed-by: Maxim Mikityanskiy <maximmi@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
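For context, a hedged example of the INDIRECT_CALL_2() pattern (handler names are placeholders, not the mlx5 ones): the macro compares the function pointer against the likely candidates and calls them directly, avoiding a retpoline-protected indirect call on the hot path.

    #include <linux/indirect_call_wrapper.h>

    struct sk_buff;

    /* The two known handler implementations the pointer usually refers to. */
    int example_handle_rx_linear(struct sk_buff *skb);
    int example_handle_rx_nonlinear(struct sk_buff *skb);

    static int example_dispatch_rx(int (*handler)(struct sk_buff *skb),
                                   struct sk_buff *skb)
    {
            /* Expands to: if handler equals the first candidate, call it
             * directly; else if it equals the second, call that directly;
             * otherwise fall back to the plain indirect call.
             */
            return INDIRECT_CALL_2(handler,
                                   example_handle_rx_nonlinear,
                                   example_handle_rx_linear,
                                   skb);
    }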
-
Tariq Toukan authored
Use INDIRECT_CALL_2() helper to avoid the cost of the indirect call when/if CONFIG_RETPOLINE=y. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Reviewed-by: Maxim Mikityanskiy <maximmi@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Raed Salem authored
Synchronize offloading device ESN with xfrm received SN by updating an existing IPsec HW context with the new SN. Signed-off-by: Raed Salem <raeds@mellanox.com> Reviewed-by: Boris Pismenny <borisp@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Raed Salem authored
On the receive flow, inspect received packets for an IPsec offload indication using the CQE; for IPsec offloaded packets, propagate the offload status and stack handle to the stack for further processing.

Supported statuses:
 - Offload ok.
 - Authentication failure.
 - Bad trailer indication.

Connect-X IPsec does not use mlx5e_ipsec_handle_rx_cqe.

For RX-only offload, we see the BW gain. Below is the iperf3 performance report on two servers with 24-core Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz and ConnectX6-DX. We use one thread per IPsec tunnel.

 ---------------------------------------------------------------------
 Mode           | Num tunnel | BW     | Send CPU util | Recv CPU util
                |            | (Gbps) | (Average %)   | (Average %)
 ---------------------------------------------------------------------
 Crypto offload | 1          | 4.6    | 4.2           | 14.5
 ---------------------------------------------------------------------
 Crypto offload | 24         | 38     | 73            | 63
 ---------------------------------------------------------------------
 Non-offload    | 1          | 4      | 4             | 13
 ---------------------------------------------------------------------
 Non-offload    | 24         | 23     | 52            | 67

Signed-off-by: Raed Salem <raeds@mellanox.com> Reviewed-by: Boris Pismenny <borisp@mellanox.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Huy Nguyen authored
Introduce the decrypt FT, the RX error FT and the default rules. The IPsec RX decrypt flow table is pointed to by the TTC (Traffic Type Classifier) ESP steering rules. The decrypt flow table has two flow groups: the first keeps the decrypt steering rules programmed via the "ip xfrm s" interface, and the second has a default rule that forwards all non-offloaded ESP packets to the TTC ESP default RSS TIR.

The RX error flow table is the destination of the decrypt steering rules in the IPsec RX decrypt flow table. It has a fixed rule with a single copy action that copies ipsec_syndrome to metadata_regB[0:6]. The IPsec syndrome is used to filter out non-IPsec packets and to return the IPsec crypto offload status on the RX flow. The destination of the RX error flow table is the TTC ESP default RSS TIR.

All the FTs (decrypt FT and error FT) are created only when IPsec SAs are added. If there are no IPsec SAs, the FTs are removed.

Signed-off-by: Huy Nguyen <huyn@mellanox.com> Reviewed-by: Boris Pismenny <borisp@mellanox.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Huy Nguyen authored
- Add FTE actions IPsec ENCRYPT/DECRYPT
- Add ipsec_obj_id field in FTE
- Add new action field MLX5_ACTION_IN_FIELD_IPSEC_SYNDROME

Signed-off-by: Huy Nguyen <huyn@mellanox.com> Reviewed-by: Raed Salem <raeds@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Raed Salem authored
This patch adds support for Connect-X IPsec crypto offload by implementing the routines needed by the IPsec acceleration layer, which delegate IPsec offloads to Connect-X routines.

In Connect-X IPsec, a Security Association (SA) is added or deleted by allocating a HW context for an encryption/decryption key and a HW context for a matching SA (IPsec object). The Security Policy (SP) is added or deleted by creating matching Tx/Rx steering rules with an action of encryption/decryption respectively, executed using the previously allocated SA HW context.

When a new xfrm state (SA) is added:
- Use a separate crypto key HW context.
- Create a separate IPsec context in HW to include the SA properties:
  - aes-gcm salt.
  - ICV properties (ICV length, implicit IV).
  - on supported devices, also update ESN.
- Associate the allocated crypto key with this IPsec context.

Introduce a new compilation flag MLX5_IPSEC for it. Downstream patches will implement the Rx/Tx steering and will add the ESN update.

Signed-off-by: Raed Salem <raeds@mellanox.com> Signed-off-by: Huy Nguyen <huyn@mellanox.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Raed Salem authored
This sets the base for downstream patches to support the new IPsec implementation of the Connect-X family. The following modifications are made:
- Remove the accel layer dependency from MLX5_FPGA_IPSEC.
- Introduce accel_ipsec_ops; each IPsec device will have to support these ops.

Signed-off-by: Raed Salem <raeds@mellanox.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Parav Pandit authored
Currently, only the ECPF allows enabling the eswitch when SR-IOV is disabled. Allow the PF to enable the eswitch when SR-IOV is disabled as well, and load VF vports when the eswitch is already enabled. Signed-off-by: Parav Pandit <parav@mellanox.com> Reviewed-by: Roi Dayan <roid@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Parav Pandit authored
For a non-ECPF eswitch manager function, vports are already enabled/disabled when the eswitch is enabled/disabled respectively, so simplify the function change handler for such eswitch manager functions. As a result, the ECPF is the only one that keeps a PF/VF function change handler. Signed-off-by: Parav Pandit <parav@mellanox.com> Reviewed-by: Roi Dayan <roid@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Tariq Toukan authored
TLS runs only over Eth, and the Eth driver is the only user of the core TLS functionality; there is no point in having the core functionality without its use in the Eth driver. Hence, let both TLS core implementations depend on MLX5_CORE_EN, and select MLX5_EN_TLS. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Reviewed-by: Raed Salem <raeds@mellanox.com> Reviewed-by: Boris Pismenny <borisp@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Saeed Mahameed authored
mlx5e_accel_sk_get_rxq is only used in ktls_rx.c, which already depends on XPS to be compiled. Move it from the generic en_accel.h header and make it local to ktls_rx.c, to fix the build break below:

    In file included from ../drivers/net/ethernet/mellanox/mlx5/core/en_main.c:49:0:
    ../drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h: In function ‘mlx5e_accel_sk_get_rxq’:
    ../drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h:153:12: error: implicit declaration of function ‘sk_rx_queue_get’
    ...
      int rxq = sk_rx_queue_get(sk);
                ^~~~~~~~~~~~~~~

Fixes: 1182f365 ("net/mlx5e: kTLS, Add kTLS RX HW offload support")
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Reported-by: Randy Dunlap <rdunlap@infradead.org>
-
Parav Pandit authored
The commit cited in the Fixes tag missed setting the switch id of the PF and VF representor ports. Because of this, flows cannot be offloaded; a simple command like the one below fails to offload with the following error:

    tc filter add dev ens2f0np0 parent ffff: prio 1 flower \
        dst_mac 00:00:00:00:00:00/00:00:00:00:00:00 skip_sw \
        action mirred egress redirect dev ens2f0np0pf0vf0
    Error: mlx5_core: devices are not on same switch HW, can't offload forwarding.

Hence, fix it by setting the switch id for each PF and VF representor port, as was done before the cited commit.

Fixes: 71ad8d55 ("devlink: Replace devlink_port_attrs_set parameters with a struct")
Signed-off-by: Parav Pandit <parav@mellanox.com> Reviewed-by: Roi Dayan <roid@mellanox.com>
-