- 18 Jun, 2022 4 commits
-
-
Xiang wangx authored
Delete the redundant word 'the'. Signed-off-by: Xiang wangx <wangxiang@cdjrlc.com> Link: https://lore.kernel.org/r/20220616142624.3397-1-wangxiang@cdjrlc.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Yinjun Zhang authored
Show correct pause frame parameters for nfp. These parameters cannot be configured, so .set_pauseparam() is not implemented. With this change:

    #ethtool --show-pause enp1s0np0
    Pause parameters for enp1s0np0:
    Autonegotiate: off
    RX: on
    TX: on

Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com> Signed-off-by: Simon Horman <simon.horman@corigine.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://lore.kernel.org/r/20220616133358.135305-1-simon.horman@corigine.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
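A minimal sketch of how such a read-only report can look in a driver's ethtool_ops; the function name and the fixed values here are illustrative, not the exact nfp implementation:

    /* Report fixed pause settings; no .set_pauseparam is provided,
     * so ethtool rejects attempts to change them.
     */
    static void example_get_pauseparam(struct net_device *netdev,
                                       struct ethtool_pauseparam *pause)
    {
            pause->autoneg = AUTONEG_DISABLE; /* pause autoneg not supported */
            pause->rx_pause = 1;              /* RX pause always on */
            pause->tx_pause = 1;              /* TX pause always on */
    }

    static const struct ethtool_ops example_ethtool_ops = {
            .get_pauseparam = example_get_pauseparam,
            /* .set_pauseparam intentionally left unimplemented */
    };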
-
Oleksij Rempel authored
Rework MDIO locking to avoid potential circular locking:

    WARNING: possible circular locking dependency detected
    5.19.0-rc1-ar9331-00017-g3ab364c7c48c #5 Not tainted
    ------------------------------------------------------
    kworker/u2:4/68 is trying to acquire lock:
    81f3c83c (ar9331:1005:(&ar9331_mdio_regmap_config)->lock){+.+.}-{4:4}, at: regmap_write+0x50/0x8c

    but task is already holding lock:
    81f60494 (&bus->mdio_lock){+.+.}-{4:4}, at: mdiobus_read+0x40/0x78

    which lock already depends on the new lock.

    the existing dependency chain (in reverse order) is:

    -> #1 (&bus->mdio_lock){+.+.}-{4:4}:
       lock_acquire+0x2d4/0x360
       __mutex_lock+0xf8/0x384
       mutex_lock_nested+0x2c/0x38
       mdiobus_write+0x44/0x80
       ar9331_sw_bus_write+0x50/0xe4
       _regmap_raw_write_impl+0x604/0x724
       _regmap_bus_raw_write+0x9c/0xb4
       _regmap_write+0xdc/0x1a0
       _regmap_update_bits+0xf4/0x118
       _regmap_select_page+0x108/0x138
       _regmap_raw_read+0x25c/0x288
       _regmap_bus_read+0x60/0x98
       _regmap_read+0xd4/0x1b0
       _regmap_update_bits+0xc4/0x118
       regmap_update_bits_base+0x64/0x8c
       ar9331_sw_irq_bus_sync_unlock+0x40/0x6c
       __irq_set_handler+0x7c/0xac
       ar9331_sw_irq_map+0x48/0x7c
       irq_domain_associate+0x174/0x208
       irq_create_mapping_affinity+0x1a8/0x230
       ar9331_sw_probe+0x22c/0x388
       mdio_probe+0x44/0x70
       really_probe+0x200/0x424
       __driver_probe_device+0x290/0x298
       driver_probe_device+0x54/0xe4
       __device_attach_driver+0xe4/0x130
       bus_for_each_drv+0xb4/0xd8
       __device_attach+0x104/0x1a4
       bus_probe_device+0x48/0xc4
       device_add+0x600/0x800
       mdio_device_register+0x68/0xa0
       of_mdiobus_register+0x2bc/0x3c4
       ag71xx_probe+0x6e4/0x984
       platform_probe+0x78/0xd0
       really_probe+0x200/0x424
       __driver_probe_device+0x290/0x298
       driver_probe_device+0x54/0xe4
       __driver_attach+0x17c/0x190
       bus_for_each_dev+0x8c/0xd0
       bus_add_driver+0x110/0x228
       driver_register+0xe4/0x12c
       do_one_initcall+0x104/0x2a0
       kernel_init_freeable+0x250/0x288
       kernel_init+0x34/0x130
       ret_from_kernel_thread+0x14/0x1c

    -> #0 (ar9331:1005:(&ar9331_mdio_regmap_config)->lock){+.+.}-{4:4}:
       check_noncircular+0x88/0xc0
       __lock_acquire+0x10bc/0x18bc
       lock_acquire+0x2d4/0x360
       __mutex_lock+0xf8/0x384
       mutex_lock_nested+0x2c/0x38
       regmap_write+0x50/0x8c
       ar9331_sw_mbus_read+0x74/0x1b8
       __mdiobus_read+0x90/0xec
       mdiobus_read+0x50/0x78
       get_phy_device+0xa0/0x18c
       fwnode_mdiobus_register_phy+0x120/0x1d4
       of_mdiobus_register+0x244/0x3c4
       devm_of_mdiobus_register+0xe8/0x100
       ar9331_sw_setup+0x16c/0x3a0
       dsa_register_switch+0x7dc/0xcc0
       ar9331_sw_probe+0x370/0x388
       mdio_probe+0x44/0x70
       really_probe+0x200/0x424
       __driver_probe_device+0x290/0x298
       driver_probe_device+0x54/0xe4
       __device_attach_driver+0xe4/0x130
       bus_for_each_drv+0xb4/0xd8
       __device_attach+0x104/0x1a4
       bus_probe_device+0x48/0xc4
       deferred_probe_work_func+0xf0/0x10c
       process_one_work+0x314/0x4d4
       worker_thread+0x2a4/0x354
       kthread+0x134/0x13c
       ret_from_kernel_thread+0x14/0x1c

    other info that might help us debug this:

    Possible unsafe locking scenario:

          CPU0                    CPU1
          ----                    ----
     lock(&bus->mdio_lock);
                                  lock(ar9331:1005:(&ar9331_mdio_regmap_config)->lock);
                                  lock(&bus->mdio_lock);
     lock(ar9331:1005:(&ar9331_mdio_regmap_config)->lock);

     *** DEADLOCK ***

    5 locks held by kworker/u2:4/68:
     #0: 81c04eb4 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x1e4/0x4d4
     #1: 81f0de78 (deferred_probe_work){+.+.}-{0:0}, at: process_one_work+0x1e4/0x4d4
     #2: 81f0a880 (&dev->mutex){....}-{4:4}, at: __device_attach+0x40/0x1a4
     #3: 80c8aee0 (dsa2_mutex){+.+.}-{4:4}, at: dsa_register_switch+0x5c/0xcc0
     #4: 81f60494 (&bus->mdio_lock){+.+.}-{4:4}, at: mdiobus_read+0x40/0x78

    stack backtrace:
    CPU: 0 PID: 68 Comm: kworker/u2:4 Not tainted 5.19.0-rc1-ar9331-00017-g3ab364c7c48c #5
    Workqueue: events_unbound deferred_probe_work_func
    Stack : 00000056 800d4638 81f0d64c 00000004 00000018 00000000 80a20000 80a20000
            80937590 81ef3858 81f0d760 3913578a 00000005 8045e824 81f0d600 a8db84cc
            00000000 00000000 80937590 00000a44 00000000 00000002 00000001 ffffffff
            81f0d6a4 80982d7c 0000000f 20202020 80a20000 00000001 80937590 81ef3858
            81f0d760 3913578a 00000005 00000005 00000000 03bd0000 00000000 80e00000
            ...
    Call Trace:
    [<80069db0>] show_stack+0x94/0x130
    [<8045e824>] dump_stack_lvl+0x54/0x8c
    [<800c7fac>] check_noncircular+0x88/0xc0
    [<800ca068>] __lock_acquire+0x10bc/0x18bc
    [<800cb478>] lock_acquire+0x2d4/0x360
    [<807b84c4>] __mutex_lock+0xf8/0x384
    [<807b877c>] mutex_lock_nested+0x2c/0x38
    [<804ea640>] regmap_write+0x50/0x8c
    [<80501e38>] ar9331_sw_mbus_read+0x74/0x1b8
    [<804fe9a0>] __mdiobus_read+0x90/0xec
    [<804feac4>] mdiobus_read+0x50/0x78
    [<804fcf74>] get_phy_device+0xa0/0x18c
    [<804ffeb4>] fwnode_mdiobus_register_phy+0x120/0x1d4
    [<805004f0>] of_mdiobus_register+0x244/0x3c4
    [<804f0c50>] devm_of_mdiobus_register+0xe8/0x100
    [<805017a0>] ar9331_sw_setup+0x16c/0x3a0
    [<807355c8>] dsa_register_switch+0x7dc/0xcc0
    [<80501468>] ar9331_sw_probe+0x370/0x388
    [<804ff0c0>] mdio_probe+0x44/0x70
    [<804d1848>] really_probe+0x200/0x424
    [<804d1cfc>] __driver_probe_device+0x290/0x298
    [<804d1d58>] driver_probe_device+0x54/0xe4
    [<804d2298>] __device_attach_driver+0xe4/0x130
    [<804cf048>] bus_for_each_drv+0xb4/0xd8
    [<804d200c>] __device_attach+0x104/0x1a4
    [<804d026c>] bus_probe_device+0x48/0xc4
    [<804d108c>] deferred_probe_work_func+0xf0/0x10c
    [<800a0ffc>] process_one_work+0x314/0x4d4
    [<800a17fc>] worker_thread+0x2a4/0x354
    [<800a9a54>] kthread+0x134/0x13c
    [<8006306c>] ret_from_kernel_thread+0x14/0x1c

Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de> Link: https://lore.kernel.org/r/20220616112550.877118-1-o.rempel@pengutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
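The splat boils down to a classic AB-BA ordering problem between the MDIO bus mutex and the regmap mutex. A minimal sketch of that pattern, with generic mutex names standing in for the actual ar9331 locks:

    #include <linux/mutex.h>

    static DEFINE_MUTEX(mdio_lock);   /* stands in for bus->mdio_lock */
    static DEFINE_MUTEX(regmap_lock); /* stands in for the regmap config lock */

    /* Probe/IRQ path: regmap access first, nested MDIO write second. */
    static void path_a(void)
    {
            mutex_lock(&regmap_lock);
            mutex_lock(&mdio_lock);   /* dependency A -> B recorded */
            mutex_unlock(&mdio_lock);
            mutex_unlock(&regmap_lock);
    }

    /* PHY scan path: MDIO read first, regmap access underneath. */
    static void path_b(void)
    {
            mutex_lock(&mdio_lock);
            mutex_lock(&regmap_lock); /* B -> A: lockdep flags the inversion */
            mutex_unlock(&regmap_lock);
            mutex_unlock(&mdio_lock);
    }

The rework removes one leg of this cycle so that both paths acquire the locks in a single, consistent order.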
-
https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Jakub Kicinski authored
Daniel Borkmann says:
====================
pull-request: bpf-next 2022-06-17

We've added 72 non-merge commits during the last 15 day(s) which contain a total of 92 files changed, 4582 insertions(+), 834 deletions(-).

The main changes are:

1) Add 64 bit enum value support to BTF, from Yonghong Song.
2) Implement support for sleepable BPF uprobe programs, from Delyan Kratunov.
3) Add new BPF helpers to issue and check TCP SYN cookies without binding to a socket, especially useful in synproxy scenarios, from Maxim Mikityanskiy.
4) Fix libbpf's internal USDT address translation logic for shared libraries as well as uprobe's symbol file offset calculation, from Andrii Nakryiko.
5) Extend libbpf to provide an API for textual representation of the various map/prog/attach/link types and use it in bpftool, from Daniel Müller.
6) Provide BTF line info for RV64 and RV32 JITs, and fix a put_user bug in the core seen in 32 bit when storing BPF function addresses, from Pu Lehui.
7) Fix libbpf's BTF pointer size guessing by adding a list of various aliases for 'long' types, from Douglas Raillard.
8) Fix bpftool to readd setting rlimit since probing for memcg-based accounting has been unreliable and caused a regression on COS, from Quentin Monnet.
9) Fix UAF in BPF cgroup's effective program computation triggered upon BPF link detachment, from Tadeusz Struk.
10) Fix bpftool build bootstrapping during cross compilation which was pointing to the wrong AR process, from Shahab Vahedi.
11) Fix logic bug in libbpf's is_pow_of_2 implementation, from Yuze Chi.
12) BPF hash map optimization to avoid grabbing spinlocks of all CPUs when there is no free element. Also add a benchmark as reproducer, from Feng Zhou.
13) Fix bpftool's codegen to bail out when there's no BTF, from Michael Mullin.
14) Various minor cleanup and improvements all over the place.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (72 commits)
  bpf: Fix bpf_skc_lookup comment wrt. return type
  bpf: Fix non-static bpf_func_proto struct definitions
  selftests/bpf: Don't force lld on non-x86 architectures
  selftests/bpf: Add selftests for raw syncookie helpers in TC mode
  bpf: Allow the new syncookie helpers to work with SKBs
  selftests/bpf: Add selftests for raw syncookie helpers
  bpf: Add helpers to issue and check SYN cookies in XDP
  bpf: Allow helpers to accept pointers with a fixed size
  bpf: Fix documentation of th_len in bpf_tcp_{gen,check}_syncookie
  selftests/bpf: add tests for sleepable (uk)probes
  libbpf: add support for sleepable uprobe programs
  bpf: allow sleepable uprobe programs to attach
  bpf: implement sleepable uprobes by chaining gps
  bpf: move bpf_prog to bpf.h
  libbpf: Fix internal USDT address translation logic for shared libraries
  samples/bpf: Check detach prog exist or not in xdp_fwd
  selftests/bpf: Avoid skipping certain subtests
  selftests/bpf: Fix test_varlen verification failure with latest llvm
  bpftool: Do not check return value from libbpf_set_strict_mode()
  Revert "bpftool: Use libbpf 1.0 API mode instead of RLIMIT_MEMLOCK"
  ...
====================

Link: https://lore.kernel.org/r/20220617220836.7373-1-daniel@iogearbox.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 17 Jun, 2022 36 commits
-
-
Tobias Klauser authored
The function no longer returns 'unsigned long' as of commit edbf8c01 ("bpf: add skc_lookup_tcp helper"). Signed-off-by: Tobias Klauser <tklauser@distanz.ch> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220617152121.29617-1-tklauser@distanz.ch
-
Joanne Koong authored
This patch does two things:
1) Marks the dynptr bpf_func_proto structs that were added in [1] as static, as pointed out by the kernel test robot in [2].
2) There are some bpf_func_proto structs marked as extern which can instead be statically defined.

[1] https://lore.kernel.org/bpf/20220523210712.3641569-1-joannelkoong@gmail.com/
[2] https://lore.kernel.org/bpf/62ab89f2.Pko7sI08RAKdF8R6%25lkp@intel.com/

Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Joanne Koong <joannelkoong@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220616225407.1878436-1-joannelkoong@gmail.com
-
Hoang Le authored
tipc_dest_list_len() is not being called anywhere. Clean it up. Acked-by: Jon Maloy <jmaloy@redhat.com> Signed-off-by: Hoang Le <hoang.h.le@dektech.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Oleksij Rempel authored
On probe, the JML register reads as zero. The register is only configured later in macb_init_hw(), which is called on open. Since the value is zero at probe time, subtracting the header and FCS lengths produces a negative max_mtu. This issue was affecting DSA drivers with MTU support (for example KSZ9477). Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de> Reviewed-by: Claudiu Beznea <claudiu.beznea@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
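A hedged sketch of the failure mode described above; the register accessor and the fallback shown are illustrative, not a verbatim excerpt of the macb patch:

    u32 jml = gem_readl(bp, JML);   /* reads 0 at probe, before macb_init_hw() */

    /* Broken: with jml == 0 this underflows and max_mtu becomes bogus. */
    dev->max_mtu = jml - ETH_HLEN - ETH_FCS_LEN;

    /* Assumed shape of a fix: only derive the limit from JML once it is set. */
    if (jml)
            dev->max_mtu = jml - ETH_HLEN - ETH_FCS_LEN;
    else
            dev->max_mtu = ETH_DATA_LEN;    /* fall back to 1500 until the HW limit is known */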
-
Kees Cook authored
Under CONFIG_FORTIFY_SOURCE=y and CONFIG_UBSAN_BOUNDS=y, Clang miscalculates the size of the destination buffer here (0x10 instead of 0x14). This copy is a fixed size (sizeof(struct fw_section_info_st)), with the source and dest both being struct fw_section_info_st, so the memcpy should be safe, assuming the index is within bounds, which is UBSAN_BOUNDS's responsibility to figure out. Avoid the whole thing and just do a direct assignment. This results in no change to the executable code. Cc: "David S. Miller" <davem@davemloft.net> Cc: Eric Dumazet <edumazet@google.com> Cc: Jakub Kicinski <kuba@kernel.org> Cc: Paolo Abeni <pabeni@redhat.com> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Tom Rix <trix@redhat.com> Cc: Leon Romanovsky <leon@kernel.org> Cc: Jiri Pirko <jiri@nvidia.com> Cc: Vladimir Oltean <olteanv@gmail.com> Cc: Simon Horman <simon.horman@corigine.com> Cc: netdev@vger.kernel.org Cc: llvm@lists.linux.dev Link: https://github.com/ClangBuiltLinux/linux/issues/1592 Signed-off-by: Kees Cook <keescook@chromium.org> Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org> Tested-by: Nathan Chancellor <nathan@kernel.org> # build Signed-off-by: David S. Miller <davem@davemloft.net>
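A hedged before/after sketch of the change described: a fixed-size memcpy between identical struct types replaced by a direct struct assignment. The array and index names are illustrative, not the exact driver code:

    struct fw_section_info_st dest;

    /* Before: fixed-size copy that the FORTIFY/UBSAN combination trips over. */
    memcpy(&dest, &fw_info->fw_section_info[i], sizeof(dest));

    /* After: plain struct assignment; the generated code is the same, and
     * there is nothing for the fortified memcpy() wrapper to second-guess.
     */
    dest = fw_info->fw_section_info[i];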
-
Oleksij Rempel authored
The current kernel compiles this driver with warnings; this patch fixes them.

    drivers/net/ethernet/atheros/ag71xx.c: In function 'ag71xx_fast_reset':
    drivers/net/ethernet/atheros/ag71xx.c:996:31: warning: passing argument 2 of 'ag71xx_hw_set_macaddr' discards 'const' qualifier from pointer target type [-Wdiscarded-qualifiers]
      996 |   ag71xx_hw_set_macaddr(ag, dev->dev_addr);
          |                             ~~~^~~~~~~~~~
    drivers/net/ethernet/atheros/ag71xx.c:951:69: note: expected 'unsigned char *' but argument is of type 'const unsigned char *'
      951 | static void ag71xx_hw_set_macaddr(struct ag71xx *ag, unsigned char *mac)
          |                                                      ~~~~~~~~~~~~~~~^~~
    drivers/net/ethernet/atheros/ag71xx.c: In function 'ag71xx_open':
    drivers/net/ethernet/atheros/ag71xx.c:1441:32: warning: passing argument 2 of 'ag71xx_hw_set_macaddr' discards 'const' qualifier from pointer target type [-Wdiscarded-qualifiers]
     1441 |   ag71xx_hw_set_macaddr(ag, ndev->dev_addr);
          |                             ~~~~^~~~~~~~~~
    drivers/net/ethernet/atheros/ag71xx.c:951:69: note: expected 'unsigned char *' but argument is of type 'const unsigned char *'
      951 | static void ag71xx_hw_set_macaddr(struct ag71xx *ag, unsigned char *mac)
          |                                                      ~~~~~~~~~~~~~~~^~~

Fixes: adeef3e3 ("net: constify netdev->dev_addr") Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
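The warning goes away by constifying the MAC parameter so callers can pass the now-const netdev->dev_addr. A hedged sketch of the change, with the register-programming body omitted:

    /* Accept a const MAC pointer; dev_addr is const since
     * commit adeef3e3 ("net: constify netdev->dev_addr").
     */
    static void ag71xx_hw_set_macaddr(struct ag71xx *ag, const unsigned char *mac)
    {
            /* ... program the MAC address registers from mac[0..5] ... */
    }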
-
David S. Miller authored
Remove accidental dup of tcp_wmem_schedule. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Ong Boon Leong says:
====================
pcs-xpcs, stmmac: add 1000BASE-X AN for network switch

Thanks for the v4 review feedback in [1] and [2]. I have changed the v5 implementation as follows.

v5 changes:
1/5 - No change from v4.
2/5 - No change from v4.
3/5 - [Fix] Make xpcs_modify_changed() static and use mdiodev_modify_changed() for cleaner code, as suggested by Russell King.
4/5 - [Fix] Use fwnode_get_phy_mode() as recommended by Andrew Lunn.
5/5 - [Fix] Make fwnode = of_fwnode_handle(priv->plat->phylink_node) order after priv = netdev_priv(dev).

v4 changes:
1/5 - Squash v3:1/7 & 2/7 patches into v4:1/6 so that it passes build.
2/5 - [No change] Same as v3:3/7.
3/5 - [Fix] Fix issues identified by Russell in [1].
4/5 - [Fix] Drop v3:5/7 patch per input by Russell in [2] and make dwmac-intel clear the ovr_an_inband flag if fixed-link is used in ACPI _DSD.
5/5 - [No change] Same as v3:7/7.

For the steps to set up ACPI _DSD and check it, they are the same as in [3].

Reference:
[1] https://patchwork.kernel.org/comment/24894239/
[2] https://patchwork.kernel.org/comment/24895330/
[3] https://patchwork.kernel.org/project/netdevbpf/cover/20220610033610.114084-1-boon.leong.ong@intel.com/
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ong Boon Leong authored
stmmac_mdio_register() lacks fixed-link consideration and only skips PHY scanning if it has done DT-style PHY discovery. So, when fixed-link is specified via DT or ACPI _DSD, PHY scanning should not happen either. v2: fix incorrect order related to fwnode that was not caught on non-DT platforms. Tested-by: Emilio Riva <emilio.riva@ericsson.com> Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
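A hedged sketch of skipping the PHY scan when a fixed-link node is present. fwnode_get_named_child_node() and fwnode_handle_put() are the generic fwnode helpers; the surrounding flow and the label are assumptions, not the literal stmmac change:

    struct fwnode_handle *fixed_node;

    /* If the firmware description (DT or ACPI _DSD) carries a fixed-link
     * node, there is no PHY on the MDIO bus to discover; skip the scan.
     */
    fixed_node = fwnode_get_named_child_node(fwnode, "fixed-link");
    if (fixed_node) {
            fwnode_handle_put(fixed_node);
            goto bus_register_done;      /* hypothetical label */
    }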
-
Ong Boon Leong authored
Currently, phy_interface for a TSN controller instance is set based on its PCI Device ID. For the SGMII PHY interface, phy_interface defaults to PHY_INTERFACE_MODE_SGMII. As C37 AN supports both SGMII and 1000BASE-X mode, we add support for a 'phy-mode' ACPI _DSD so the mode can be customized per port and per platform.

v3: use fwnode_get_phy_mode() as suggested by Andrew Lunn in https://patchwork.kernel.org/comment/24895330/
v2: For platforms that set 'fixed-link' using ACPI _DSD, we will unset xpcs_an_inband within stmmac. Thanks to Russell King for his comment in https://patchwork.kernel.org/comment/24890222/
v1: Thanks to Andrew Lunn's guidance in https://patchwork.kernel.org/comment/24827101/

Tested-by: Emilio Riva <emilio.riva@ericsson.com> Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
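A hedged sketch of overriding the PCI-ID based default with a firmware-provided 'phy-mode', using fwnode_get_phy_mode(); the plat field name is illustrative:

    int phy_mode;

    /* Prefer an explicit 'phy-mode' from DT/ACPI _DSD over the PCI-ID based
     * default. fwnode_get_phy_mode() returns a phy_interface_t value on
     * success or a negative errno if the property is absent.
     */
    phy_mode = fwnode_get_phy_mode(fwnode);
    if (phy_mode >= 0)
            plat->phy_interface = phy_mode;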
-
Ong Boon Leong authored
For CL37 1000BASE-X AN, DW xPCS does not support the C22 method but offers a C45 vendor-specific MII MMD for programming. We also add the ability to disable autoneg (through ethtool) for certain network switches that support 1000BASE-X (1000 Mbps, full duplex) but not the autoneg capability.

v4: Fixes to comment from Russell King. Thanks! https://patchwork.kernel.org/comment/24894239/ Make xpcs_modify_changed() private and change to use mdiodev_modify_changed() for cleaner code.
v3: Fixes to issues spotted by Russell King. Thanks! https://patchwork.kernel.org/comment/24890210/ Use phylink_mii_c22_pcs_decode_state(), remove unnecessary interrupt clearing and skip speed & duplex setting if AN is enabled.
v2: Fixes to issues spotted by Russell King in v1. Thanks! https://patchwork.kernel.org/comment/24826650/ Use phylink_mii_c22_pcs_encode_advertisement() and implement C45 MII ADV handling since the IP only supports C45 access.

Tested-by: Emilio Riva <emilio.riva@ericsson.com> Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com> Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ong Boon Leong authored
Currently, intel_speed_mode_2500() redundantly fixes up phy_interface to PHY_INTERFACE_MODE_SGMII if the underlying controller is in 1000 Mbps SGMII mode; the value of phy_interface has already been initialized earlier. This patch removes that redundancy to prepare for setting 1000BASE-X mode in certain hardware platform configurations. Also update intel_mgbe_common_data() to include the 1000BASE-X setup. Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ong Boon Leong authored
xpcs_config() has an 'advertising' input that is required for C37 1000BASE-X AN later in this series, so we prepare xpcs_do_config() for it. For sja1105, xpcs_do_config() is used for xpcs configuration without depending on the advertising input, so pass NULL there. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Ido Schimmel says:
====================
mlxsw: L3 HW stats improvements

While testing L3 HW stats [1] on top of mlxsw, two issues were found:

1. Stats cannot be enabled for more than 205 netdevs. This was fixed in commit 4b7a632a ("mlxsw: spectrum_cnt: Reorder counter pools").
2. ARP packets are counted as errors. Patch #1 takes care of that. See the commit message for details.

The goal of the majority of the rest of the patches is to add selftests that would have discovered that only about 205 netdevs can have L3 HW stats supported, despite the HW supporting much more. The obvious place to plug this in is the scale test framework.

The scale tests are currently testing two things: that some number of instances of a given resource can actually be created; and that when an attempt is made to create more than the supported amount, the failures are noted and handled gracefully. However the ability to allocate the resource does not mean that the resource actually works when passing traffic. For that, make it possible for a given scale test to also test traffic.

To that end, this patchset adds traffic tests. The goal of these is to run traffic and observe whether a sample of the allocated resource instances actually perform their task. Traffic tests are only run on the positive leg of the scale test (no point trying to pass traffic when the expected outcome is that the resource will not be allocated). They are opt-in: if a given test does not expose one, it is not run.

The patchset proceeds as follows:

- Patches #2 and #3 add to "devlink resource" support for the number of allocated RIFs, and the capacity. This is necessary, because when evaluating how many L3 HW stats instances it should be possible to allocate, the limiting resource on Spectrum-2 and above currently is not the counters themselves, but actually the RIFs.
- Patch #6 adds support for invocation of a traffic test, if a given scale test exposes it.
- Patch #7 adds support for skipping a given scale test. Because on Spectrum-2 and above, the limiting factor to L3 HW stats instances is actually the number of RIFs, there is no point in running the failing leg of the scale test, because it would test exhaustion of RIFs, not of RIF counters.
- With patch #8, the scale test drivers pass the target number to the cleanup function of a scale test.
- In patch #9, add a traffic test to the tc_flower selftests. This makes sure that the flow counters installed with the ACLs actually do count as they are supposed to.
- In patch #10, add a new scale selftest for RIF counter scale, including a traffic test.
- In patch #11, the scale target for the tc_flower selftest is dynamically set instead of being hard coded.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ca0a53dcec9495d1dc5bbc369c810c520d728373
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Instead of hard coding the scale target in the test, dynamically set it based on the maximum number of flow counters and their current occupancy. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
This test creates as many RIFs as possible, ideally more than there can be RIF counters (though that is currently only possible on Spectrum-1). It then tries to enable L3 HW stats on each of the RIFs. It also contains a traffic test, which runs traffic through a log2 subset of those counters and checks that the traffic is reflected in the counter values. Like with the tc_flower traffic test, take a log2 subset of rules. The logic behind picking log2 rules is that every bit of the instantiated item's number is then exercised. This should catch issues whether they happen at the high end, low end, or somewhere in between. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
Add a test that checks that the created filters do actually trigger on matching traffic. Exercising all the rules would be a very lengthy process. Instead, take a log2 subset of rules. The logic behind picking log2 rules is that then every bit of the instantiated item's number is exercised. This should catch issues whether they happen at the high end, low end, or somewhere in between. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
The scale tests are verifying behavior of mlxsw when number of instances of some resource reaches the ASIC capacity. The number of instances is referred to as "target" number. No scale tests so far needed to know this target number to clean up. E.g. the tc_flower simply removes the clsact qdisc that all the tested filters are hooked onto, and that takes care of collecting all the filters. However, for the RIF counter test, which is being added in a future patch, VLAN netdevices are created. These are created as part of the test, but of course the cleanup needs to undo them again. For that it needs to know how many there were. To support this usage, pass the target number to the cleanup callback. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
The scale tests are currently testing two things: that some number of instances of a given resource can actually be created; and that when an attempt is made to create more than the supported amount, the failures are noted and handled gracefully. Sometimes the scale test depends on more than one resource. In particular, a following patch will add a RIF counter scale test, which depends on the number of RIF counters that can be bound, and also on the number of RIFs that can be created. When the test is limited by the auxiliary resource and not by the primary one, there's no point trying to run the overflow test, because it would be testing exhaustion of the wrong resource. To support this use case, when the $test_get_target yields 0, skip the test instead. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
The scale tests are currently testing two things: that some number of instances of a given resource can actually be created; and that when an attempt is made to create more than the supported amount, the failures are noted and handled gracefully. However the ability to allocate the resource does not mean that the resource actually works when passing traffic. For that, make it possible for a given scale to also test traffic. Traffic test is only run on the positive leg of the scale test (no point trying to pass traffic when the expected outcome is that the resource will not be allocated). Traffic tests are opt-in, if a given test does not expose it, it is not run. To this end, delay the test cleanup until after the traffic test is run. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
The scale of each resource is tested in the following manner: 1. The scale target is queried. 2. The test setup is prepared. 3. The test is invoked. In some cases, the occupancy of a resource changes as part of the second step, requiring the test to return a scale target that takes this change into account. Make this more robust by re-querying the scale target after the second step. Another possible solution is to swap the first and second steps, but when a test needs to be skipped (i.e., scale target is zero), the setup would have been in vain. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Amit Cohen authored
With the mlxsw driver, the configurations are offloaded only if there is a physical port enslaved to the virtual device (e.g., to a bridge). In the 'mirror_gre_bridge_1q_lag' test, the bridge gets an address and route before there are ports in the bridge, which means these configurations are not offloaded. Until now the test passed with the mlxsw driver even though the RIF of the bridge is not in the hardware, because the ARP packets are trapped at layer 2 and also mirrored, so there is no real need for the RIF in hardware. The previous patch changed the traps 'ARP_REQUEST' and 'ARP_RESPONSE' to be done at layer 3 instead of layer 2. With this change the ARP packets are not trapped during the test, as the RIF is not in the hardware because of the order of configurations. Reorder the configurations so that they are offloaded; then the test will pass with the changed traps. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
The Spectrum ASIC has a limit on how many L3 devices (called RIFs) can be created. The limit depends on the ASIC and FW revision, and mlxsw reads it from the FW. In order to communicate both how many RIFs there can be and how many are currently in use (i.e. occupancy), introduce a corresponding devlink resource. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
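A hedged sketch of registering such a resource with an occupancy getter through the generic devlink resource API; the resource name, ID, struct, and callback names here are illustrative, not the mlxsw implementation:

    /* Expose RIF capacity and occupancy as a devlink resource. */
    enum { EXAMPLE_RESOURCE_RIFS = 1 };

    static u64 example_rifs_occ_get(void *priv)
    {
            struct example_router *router = priv;

            return router->rifs_count;      /* single counter of allocated RIFs */
    }

    static int example_rifs_resource_register(struct devlink *devlink,
                                              struct example_router *router,
                                              u64 max_rifs)
    {
            struct devlink_resource_size_params size_params;
            int err;

            devlink_resource_size_params_init(&size_params, max_rifs, max_rifs,
                                              1, DEVLINK_RESOURCE_UNIT_ENTRY);
            err = devlink_resource_register(devlink, "rifs", max_rifs,
                                            EXAMPLE_RESOURCE_RIFS,
                                            DEVLINK_RESOURCE_ID_PARENT_TOP,
                                            &size_params);
            if (err)
                    return err;

            devlink_resource_occ_get_register(devlink, EXAMPLE_RESOURCE_RIFS,
                                              example_rifs_occ_get, router);
            return 0;
    }

With this in place, "devlink resource show" can report both the capacity (size) and the current occupancy of the RIF pool.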
-
Petr Machata authored
In order to expose number of RIFs as a resource, it is going to be handy to have the number of currently-allocated RIFs as a single number. Introduce such. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Amit Cohen authored
Currently, the traps 'ARP_REQUEST' and 'ARP_RESPONSE' occur at layer 2. To allow the packets to be flooded, they are configured with the action 'MIRROR_TO_CPU', which means that the CPU receives a replica of the packet. Today, Spectrum ASICs also support trapping ARP packets at layer 3. This behavior is better, as the packets can then simply be trapped and there is no need to mirror them. An additional motivation is that with the layer 2 traps, the ARP packets are dropped in the router as they do not have an IP header, so they are counted as error packets, which might confuse users. Add the relevant traps for layer 3 and use them instead of the existing traps. There is no visible change to user space. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Eric Dumazet says: ==================== tcp: final (?) round of mem pressure fixes While working on prior patch series (e10b02ee "Merge branch 'net-reduce-tcp_memory_allocated-inflation'"), I found that we could still have frozen TCP flows under memory pressure. I thought we had solved this in 2015, but the fix was not complete. v2: deal with zerocopy tx paths. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
The blamed commit only dealt with applications issuing small writes. The issue here is that we allow forcing a memory schedule for the sk_buff allocation, but we have no guarantee that sendmsg() is able to copy any payload into it. In this patch, I make sure the socket can use up to tcp_wmem[0] bytes. For example, if we consider tcp_wmem[0] = 4096 (default on x86), and initial skb->truesize being 1280, tcp_sendmsg() is able to copy up to 2816 bytes under memory pressure. Before this patch a sendmsg() sending more than 2816 bytes would either block forever (under persistent memory pressure), or return -EAGAIN. For bigger-MTU networks, it is advised to increase tcp_wmem[0] to avoid sending too small packets. v2: deal with zero copy paths. Fixes: 8e4d980a ("tcp: fix behavior for epoll edge trigger") Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Reviewed-by: Wei Wang <weiwan@google.com> Reviewed-by: Shakeel Butt <shakeelb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
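A sketch of the approach described here; the helper name and the exact accounting details are assumptions based on this description, not a verbatim copy of the patch:

    /* Let a sender under memory pressure still copy some payload,
     * bounded by tcp_wmem[0] minus what is already queued on the socket.
     */
    static int example_tcp_wmem_schedule(struct sock *sk, int copy)
    {
            int left;

            if (likely(sk_wmem_schedule(sk, copy)))
                    return copy;

            /* Nothing could be scheduled normally; force progress, but only
             * up to the tcp_wmem[0] budget so flows cannot stay frozen.
             */
            left = sock_net(sk)->ipv4.sysctl_tcp_wmem[0] - sk->sk_wmem_queued;
            if (left > 0)
                    sk_forced_mem_schedule(sk, min(left, copy));

            return min(copy, sk->sk_forward_alloc);
    }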
-
Eric Dumazet authored
The blamed commit only dealt with applications issuing small writes. The issue here is that we allow forcing a memory schedule for the sk_buff allocation, but we have no guarantee that sendmsg() is able to copy any payload into it. In this patch, I make sure the socket can use up to tcp_wmem[0] bytes. For example, if we consider tcp_wmem[0] = 4096 (default on x86), and initial skb->truesize being 1280, tcp_sendmsg() is able to copy up to 2816 bytes under memory pressure. Before this patch a sendmsg() sending more than 2816 bytes would either block forever (under persistent memory pressure), or return -EAGAIN. For bigger-MTU networks, it is advised to increase tcp_wmem[0] to avoid sending too small packets. v2: deal with zero copy paths. Fixes: 8e4d980a ("tcp: fix behavior for epoll edge trigger") Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Reviewed-by: Wei Wang <weiwan@google.com> Reviewed-by: Shakeel Butt <shakeelb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
sk_forced_mem_schedule() has a bug similar to ones fixed in commit 7c80b038 ("net: fix sk_wmem_schedule() and sk_rmem_schedule() errors") While this bug has little chance to trigger in old kernels, we need to fix it before the following patch. Fixes: d83769a5 ("tcp: fix possible deadlock in tcp_send_fin()") Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Reviewed-by: Shakeel Butt <shakeelb@google.com> Reviewed-by: Wei Wang <weiwan@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrii Nakryiko authored
LLVM's lld linker doesn't have a universal architecture support (e.g., it definitely doesn't work on s390x), so be safe and force lld for urandom_read and liburandom_read.so only on x86 architectures. This should fix s390x CI runs. Fixes: 3e6fe5ce ("libbpf: Fix internal USDT address translation logic for shared libraries") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220617045512.1339795-1-andrii@kernel.org
-
Alexei Starovoitov authored
Maxim Mikityanskiy says:
====================
The first patch of this series is a documentation fix. The second patch allows BPF helpers to accept memory regions of fixed size without doing runtime size checks. The next two patches add new functionality that allows XDP to accelerate iptables synproxy.

v1 of this series [1] used to include a patch that exposed conntrack lookup to BPF using stable helpers. It was superseded by series [2] by Kumar Kartikeya Dwivedi, which implements this functionality using unstable helpers.

The third patch adds new helpers to issue and check SYN cookies without binding to a socket, which is useful in the synproxy scenario. The fourth patch adds a selftest, which includes an XDP program and a userspace control application. The XDP program uses socketless SYN cookie helpers and queries conntrack status instead of socket status. The userspace control application allows to tune parameters of the XDP program. This program also serves as a minimal example of usage of the new functionality. The last two patches expose the new helpers to TC BPF and extend the selftest.

The draft of the new functionality was presented at Netdev 0x15 [3].

v2 changes: Split into two series, submitted bugfixes to bpf, dropped the conntrack patches, implemented the timestamp cookie in BPF using bpf_loop, dropped the timestamp cookie patch.
v3 changes: Moved some patches from bpf to bpf-next, dropped the patch that changed error codes, split the new helpers into IPv4/IPv6, added verifier functionality to accept memory regions of fixed size.
v4 changes: Converted the selftest to the test_progs runner. Replaced some deprecated functions in the xdp_synproxy userspace helper.
v5 changes: Fixed a bug in the selftest. Added questionable functionality to support the new helpers in TC BPF, added selftests for it.
v6 changes: Wrapped the new helpers themselves into #ifdef CONFIG_SYN_COOKIES, replaced fclose with pclose and fixed the MSS for IPv6 in the selftest.
v7 changes: Fixed the off-by-one error in indices, changed the section name to "xdp", added missing kernel config options to vmtest in CI.
v8 changes: Properly rebased, dropped the first patch (the same change was applied by someone else), updated the cover letter.
v9 changes: Fixed selftests for no_alu32.
v10 changes: Selftests for s390x were blacklisted due to lack of kfunc support, rebased the series, split selftests into separate commits, created ARG_PTR_TO_FIXED_SIZE_MEM and packed arg_size, addressed the rest of the comments.

[1]: https://lore.kernel.org/bpf/20211020095815.GJ28644@breakpoint.cc/t/
[2]: https://lore.kernel.org/bpf/20220114163953.1455836-1-memxor@gmail.com/
[3]: https://netdevconf.info/0x15/session.html?Accelerating-synproxy-with-XDP
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Maxim Mikityanskiy authored
This commit extends selftests for the new BPF helpers bpf_tcp_raw_{gen,check}_syncookie_ipv{4,6} to also test the TC BPF functionality added in the previous commit. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20220615134847.3753567-7-maximmi@nvidia.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Maxim Mikityanskiy authored
This commit allows the new BPF helpers to work in SKB context (in TC BPF programs): bpf_tcp_raw_{gen,check}_syncookie_ipv{4,6}. Using these helpers in TC BPF programs is not recommended, because it's unlikely that the BPF program will provide any substantial speedup compared to regular SYN cookies or synproxy, after the SKB is already created. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20220615134847.3753567-6-maximmi@nvidia.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Maxim Mikityanskiy authored
This commit adds selftests for the new BPF helpers: bpf_tcp_raw_{gen,check}_syncookie_ipv{4,6}. xdp_synproxy_kern.c is a BPF program that generates SYN cookies on allowed TCP ports and sends SYNACKs to clients, accelerating the synproxy iptables module. xdp_synproxy.c is a userspace control application that allows configuring the following options at runtime: list of allowed ports, MSS, window scale, TTL. A selftest is added to prog_tests that leverages the above programs to test the functionality of the new helpers. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20220615134847.3753567-5-maximmi@nvidia.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Maxim Mikityanskiy authored
The new helpers bpf_tcp_raw_{gen,check}_syncookie_ipv{4,6} allow an XDP program to generate SYN cookies in response to TCP SYN packets and to check those cookies upon receiving the first ACK packet (the final packet of the TCP handshake). Unlike bpf_tcp_{gen,check}_syncookie, these new helpers don't need a listening socket on the local machine, which allows using them together with synproxy to accelerate SYN cookie generation. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20220615134847.3753567-4-maximmi@nvidia.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
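A hedged BPF-side sketch of how an XDP program might use the IPv4 variants; the helper signatures follow their description in this series and should be verified against the in-tree UAPI headers, and the header-parsing boilerplate is omitted:

    /* Decide the fate of a TCP segment using the raw syncookie helpers.
     * iph/tch must already point at validated headers; tcp_len is doff * 4.
     */
    static __always_inline int handle_tcp(struct iphdr *iph, struct tcphdr *tch,
                                          __u32 tcp_len)
    {
            __s64 cookie;

            if (tch->syn && !tch->ack) {
                    /* SYN: derive a cookie without any listening socket. */
                    cookie = bpf_tcp_raw_gen_syncookie_ipv4(iph, tch, tcp_len);
                    if (cookie < 0)
                            return XDP_DROP;
                    /* ... craft and send the SYNACK carrying 'cookie' ... */
                    return XDP_TX;
            }

            if (tch->ack && !tch->syn &&
                bpf_tcp_raw_check_syncookie_ipv4(iph, tch))
                    return XDP_DROP;    /* bad or missing cookie in first ACK */

            return XDP_PASS;
    }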
-
Maxim Mikityanskiy authored
Before this commit, the BPF verifier required ARG_PTR_TO_MEM arguments to be followed by ARG_CONST_SIZE holding the size of the memory region. The helpers had to check that size at runtime. There are cases where the size expected by a helper is a compile-time constant; checking it at runtime is an unnecessary overhead and a waste of BPF registers. This commit allows helpers to accept pointers to memory without the corresponding ARG_CONST_SIZE, given that they define the memory region size in struct bpf_func_proto and use the ARG_PTR_TO_FIXED_SIZE_MEM type. arg_size is unionized with arg_btf_id to reduce the kernel image size, and this is valid because the two are used by different argument types. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20220615134847.3753567-3-maximmi@nvidia.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
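A hedged sketch of a bpf_func_proto using the new argument type; the field layout is an assumption based on the description above (arg_size unionized with arg_btf_id), not a verbatim excerpt of the patch:

    /* A helper taking pointers whose sizes are known at verification time,
     * so no ARG_CONST_SIZE companion arguments and no runtime length checks
     * are needed.
     */
    static const struct bpf_func_proto example_syncookie_proto = {
            .func       = bpf_tcp_raw_check_syncookie_ipv4,
            .gpl_only   = true,
            .pkt_access = true,
            .ret_type   = RET_INTEGER,
            .arg1_type  = ARG_PTR_TO_FIXED_SIZE_MEM,
            .arg1_size  = sizeof(struct iphdr),   /* checked by the verifier */
            .arg2_type  = ARG_PTR_TO_FIXED_SIZE_MEM,
            .arg2_size  = sizeof(struct tcphdr),
    };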
-