- 16 Jan, 2021 14 commits
-
-
Vladimir Oltean authored
In general it is desirable that cleanup is the reverse process of setup. In this case I am not seeing any particular issue, but with the introduction of devlink-sb for felix, a non-obvious decision had to be made as to where to put its cleanup method. When there's a convention in place, that decision becomes obvious. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
The devlink function pointer names are super long, and they would break the alignment. So reindent the existing ops now by adding one tab. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
Switches that care about QoS might have hardware support for reserving buffer pools for individual ports or traffic classes, and configuring their sizes and thresholds. Through devlink-sb (shared buffers), this is all configurable, as well as their occupancy being viewable. Add the plumbing in DSA for these operations. Individual drivers still need to call devlink_sb_register() with the shared buffers they want to expose. A helper was not created in DSA for this purpose (unlike, say, dsa_devlink_params_register), since in my opinion it does not bring any benefit over plainly calling devlink_sb_register() directly. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
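For orientation, here is a minimal sketch (not the felix code) of the direct registration the text refers to, assuming the usual devlink_sb_register()/devlink_sb_unregister() API and the ds->devlink instance that DSA already creates; the sizes and pool counts are made-up placeholders:

  /* Hypothetical driver setup/teardown illustrating a plain devlink_sb_register() call. */
  #include <net/dsa.h>
  #include <net/devlink.h>

  static int example_setup(struct dsa_switch *ds)
  {
          return devlink_sb_register(ds->devlink, 0 /* sb_index */,
                                     256 * 1024,   /* buffer size in bytes (placeholder) */
                                     2, 2,         /* ingress/egress pool counts */
                                     8, 8);        /* ingress/egress TC counts */
  }

  static void example_teardown(struct dsa_switch *ds)
  {
          devlink_sb_unregister(ds->devlink, 0);
  }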
-
Vladimir Oltean authored
We'll need to read back the watermark thresholds and occupancy from hardware (for devlink-sb integration), not only to write them as we did so far in ocelot_port_set_maxlen. So introduce 2 new functions in struct ocelot_ops, similar to wm_enc, and implement them for the 3 supported mscc_ocelot switches. Remove the INUSE and MAXUSE unpacking helpers for the QSYS_RES_STAT register, because that doesn't scale with the number of switches that mscc_ocelot supports now. They have different bit widths for the watermarks, and we need function pointers to abstract that difference away. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
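As a rough illustration of the idea (the struct and field names below are assumptions based on the description, not necessarily the exact ones the patch adds), per-SoC callbacks hide the differing watermark bit widths:

  /* Illustrative only: encode/decode/occupancy callbacks per switch family. */
  #include <linux/bits.h>
  #include <linux/types.h>

  struct example_ocelot_wm_ops {
          u32 (*wm_enc)(u16 bytes);                  /* threshold -> register field */
          u16 (*wm_dec)(u32 wm);                     /* register field -> threshold */
          void (*wm_stat)(u32 val, u32 *inuse, u32 *maxuse); /* unpack QSYS_RES_STAT */
  };

  /* e.g. a family whose watermark is a 12-bit value plus a "multiply by 16" bit */
  static u16 example_wm_dec(u32 wm)
  {
          u16 value = wm & GENMASK(11, 0);

          return (wm & BIT(12)) ? value * 16 : value;
  }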
-
Vladimir Oltean authored
Instead of reading these values from the reference manual and writing them down into the driver, it appears that the hardware gives us the option of detecting them dynamically. The number of frame references corresponds to what the reference manual notes, however it seems that the frame buffers are reported as slightly less than the books would indicate. On VSC9959 (Felix), the books say it should have 128KB of packet buffer, but the registers indicate only 129840 bytes (126.79 KB). Also, the unit of measurement for FREECNT from the documentation of all these devices is incorrect (taken from an older generation). This was confirmed by Younes Leroul from Microchip support. Not having anything better to do with these values at the moment* (this will change soon), let's just print them. *The frame buffer size is, in fact, used to calculate the tail dropping watermarks. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored (merge of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next)
Daniel Borkmann says:
====================
pull-request: bpf-next 2021-01-16

1) Extend atomic operations to the BPF instruction set along with x86-64 JIT support, that is, atomic{,64}_{xchg,cmpxchg,fetch_{add,and,or,xor}}, from Brendan Jackman.
2) Add support for using kernel module global variables (__ksym externs in BPF programs) retrieved via module's BTF, from Andrii Nakryiko.
3) Generalize BPF stackmap's buildid retrieval and add support to have buildid stored in mmap2 event for perf, from Jiri Olsa.
4) Various fixes for cross-building BPF selftests out-of-tree which then will unblock wider automated testing on ARM hardware, from Jean-Philippe Brucker.
5) Allow to retrieve SOL_SOCKET opts from sock_addr progs, from Daniel Borkmann.
6) Clean up driver's XDP buffer init and split into two helpers to init per-descriptor and non-changing fields during processing, from Lorenzo Bianconi.
7) Minor misc improvements to libbpf & bpftool, from Ian Rogers.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (41 commits)
  perf: Add build id data in mmap2 event
  bpf: Add size arg to build_id_parse function
  bpf: Move stack_map_get_build_id into lib
  bpf: Document new atomic instructions
  bpf: Add tests for new BPF atomic operations
  bpf: Add bitwise atomic instructions
  bpf: Pull out a macro for interpreting atomic ALU operations
  bpf: Add instructions for atomic_[cmp]xchg
  bpf: Add BPF_FETCH field / create atomic_fetch_add instruction
  bpf: Move BPF_STX reserved field check into BPF_STX verifier code
  bpf: Rename BPF_XADD and prepare to encode other atomics in .imm
  bpf: x86: Factor out a lookup table for some ALU opcodes
  bpf: x86: Factor out emission of REX byte
  bpf: x86: Factor out emission of ModR/M for *(reg + off)
  tools/bpftool: Add -Wall when building BPF programs
  bpf, libbpf: Avoid unused function warning on bpf_tail_call_static
  selftests/bpf: Install btf_dump test cases
  selftests/bpf: Fix installation of urandom_read
  selftests/bpf: Move generated test files to $(TEST_GEN_FILES)
  selftests/bpf: Fix out-of-tree build
  ...
====================

Link: https://lore.kernel.org/r/20210116012922.17823-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tom Rix authored
Defining DEBUG should only be done in development. So remove DEBUG. Signed-off-by: Tom Rix <trix@redhat.com> Reviewed-by: Marek Vasut <marex@denx.de> Link: https://lore.kernel.org/r/20210115153128.131026-1-trix@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
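For context, a simplified sketch of why a stray DEBUG define matters (this is roughly the fallback logic in printk.h when dynamic debug is disabled; not part of the patch):

  #ifdef DEBUG
  #define pr_debug(fmt, ...) \
          printk(KERN_DEBUG pr_fmt(fmt), ##__VA_ARGS__)
  #else
  #define pr_debug(fmt, ...) \
          no_printk(KERN_DEBUG pr_fmt(fmt), ##__VA_ARGS__)
  #endif

With DEBUG defined, every pr_debug()/dev_dbg() in the file is compiled in and emitted unconditionally, which is why it should not be left set outside development.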
-
Tom Rix authored
Defining DEBUG should only be done in development. So remove DEBUG. Signed-off-by: Tom Rix <trix@redhat.com> Link: https://lore.kernel.org/r/20210114212917.48174-1-trix@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tom Rix authored
Defining DEBUG should only be done in development. So remove DEBUG. Signed-off-by: Tom Rix <trix@redhat.com> Link: https://lore.kernel.org/r/20210113215603.1721958-1-trix@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
As explained in commit 54a0ed0d ("net: dsa: provide an option for drivers to always receive bridge VLANs"), DSA has historically been skipping VLAN switchdev operations when the bridge wasn't in vlan_filtering mode, but the reason why it was doing that has never been clear. So the configure_vlan_while_not_filtering option is there merely to preserve functionality for existing drivers. It isn't some behavior that drivers should opt into. Ideally, when all drivers leave this flag set, we can delete the dsa_port_skip_vlan_configuration() function.

New drivers always seem to omit setting this flag, for some reason. So let's reverse the logic: the DSA core sets it by default to true before the .setup() callback, and legacy drivers can turn it off. This way, new drivers get the new behavior by default, unless they explicitly set the flag to false, which is more obvious during review. A condensed sketch of the inverted default is shown after this message.

Remove the assignment from drivers which were setting it to true, and add the assignment to false for the drivers that didn't previously have it. This way, it should be easier to see how many we have left. The following drivers: lan9303, mv88e6060 were skipped from setting this flag to false, because they didn't have any VLAN offload ops in the first place. The Broadcom Starfighter 2 driver calls the common b53_switch_alloc and therefore also inherits the configure_vlan_while_not_filtering=true behavior.

Also, print a message through netlink extack every time a VLAN has been skipped. This is mildly annoying on purpose, so that (a) it is at least clear that VLANs are being skipped - the legacy behavior in itself is confusing, and the extack should be much more difficult to miss, unlike kernel logs - and (b) people have one more incentive to convert to the new behavior.

No behavior change except for the added prints is intended at this time.

  $ ip link add br0 type bridge vlan_filtering 0
  $ ip link set sw0p2 master br0
  [ 60.315148] br0: port 1(sw0p2) entered blocking state
  [ 60.320350] br0: port 1(sw0p2) entered disabled state
  [ 60.327839] device sw0p2 entered promiscuous mode
  [ 60.334905] br0: port 1(sw0p2) entered blocking state
  [ 60.340142] br0: port 1(sw0p2) entered forwarding state
  Warning: dsa_core: skipping configuration of VLAN. # This was the pvid
  $ bridge vlan add dev sw0p2 vid 100
  Warning: dsa_core: skipping configuration of VLAN.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Kurt Kanzenbach <kurt@linutronix.de> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Link: https://lore.kernel.org/r/20210115231919.43834-1-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
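A condensed sketch of the inverted default (illustration only, not the literal diff):

  /* DSA core: the new behaviour is now the default before .setup() runs. */
  static int dsa_switch_setup_sketch(struct dsa_switch *ds)
  {
          ds->configure_vlan_while_not_filtering = true;

          return ds->ops->setup(ds);
  }

  /* A legacy driver that still needs the old behaviour opts out in .setup(). */
  static int legacy_driver_setup(struct dsa_switch *ds)
  {
          ds->configure_vlan_while_not_filtering = false;
          /* ... rest of the driver's setup ... */
          return 0;
  }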
-
Christophe JAILLET authored
The wrappers in include/linux/pci-dma-compat.h should go away. The patch has been generated with the coccinelle script below and has been hand modified to replace GFP_ with a correct flag. It has been compile tested.

When memory is allocated in 'netxen_get_minidump_template()' GFP_KERNEL can be used because its only caller, 'netxen_setup_minidump()', already uses it and no lock is acquired in between.

When memory is allocated in other functions in 'netxen_nic_ctx.c' GFP_KERNEL can be used because the call chain already uses GFP_KERNEL and no lock is taken in between. The call chain is:

  netxen_nic_attach()
  --> netxen_alloc_sw_resources() : already uses GFP_KERNEL
  --> netxen_alloc_hw_resources()
  --> nx_fw_cmd_create_rx_ctx()
  --> nx_fw_cmd_create_tx_ctx()

When memory is allocated in 'netxen_init_dummy_dma()' GFP_KERNEL can be used because its only call chain already uses it and no lock is acquired in between. The call chain is:

  --> netxen_start_firmware
  --> netxen_request_firmware()
  --> request_firmware()
  --> _request_firmware()
  --> fw_get_filesystem_firmware()
  --> __getname() : already uses GFP_KERNEL
  --> netxen_init_dummy_dma()

  @@
  @@
  -    PCI_DMA_BIDIRECTIONAL
  +    DMA_BIDIRECTIONAL

  @@
  @@
  -    PCI_DMA_TODEVICE
  +    DMA_TO_DEVICE

  @@
  @@
  -    PCI_DMA_FROMDEVICE
  +    DMA_FROM_DEVICE

  @@
  @@
  -    PCI_DMA_NONE
  +    DMA_NONE

  @@
  expression e1, e2, e3;
  @@
  -    pci_alloc_consistent(e1, e2, e3)
  +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

  @@
  expression e1, e2, e3;
  @@
  -    pci_zalloc_consistent(e1, e2, e3)
  +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

  @@
  expression e1, e2, e3, e4;
  @@
  -    pci_free_consistent(e1, e2, e3, e4)
  +    dma_free_coherent(&e1->dev, e2, e3, e4)

  @@
  expression e1, e2, e3, e4;
  @@
  -    pci_map_single(e1, e2, e3, e4)
  +    dma_map_single(&e1->dev, e2, e3, e4)

  @@
  expression e1, e2, e3, e4;
  @@
  -    pci_unmap_single(e1, e2, e3, e4)
  +    dma_unmap_single(&e1->dev, e2, e3, e4)

  @@
  expression e1, e2, e3, e4, e5;
  @@
  -    pci_map_page(e1, e2, e3, e4, e5)
  +    dma_map_page(&e1->dev, e2, e3, e4, e5)

  @@
  expression e1, e2, e3, e4;
  @@
  -    pci_unmap_page(e1, e2, e3, e4)
  +    dma_unmap_page(&e1->dev, e2, e3, e4)

  @@
  expression e1, e2, e3, e4;
  @@
  -    pci_map_sg(e1, e2, e3, e4)
  +    dma_map_sg(&e1->dev, e2, e3, e4)

  @@
  expression e1, e2, e3, e4;
  @@
  -    pci_unmap_sg(e1, e2, e3, e4)
  +    dma_unmap_sg(&e1->dev, e2, e3, e4)

  @@
  expression e1, e2, e3, e4;
  @@
  -    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
  +    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)

  @@
  expression e1, e2, e3, e4;
  @@
  -    pci_dma_sync_single_for_device(e1, e2, e3, e4)
  +    dma_sync_single_for_device(&e1->dev, e2, e3, e4)

  @@
  expression e1, e2, e3, e4;
  @@
  -    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
  +    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)

  @@
  expression e1, e2, e3, e4;
  @@
  -    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
  +    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)

  @@
  expression e1, e2;
  @@
  -    pci_dma_mapping_error(e1, e2)
  +    dma_mapping_error(&e1->dev, e2)

  @@
  expression e1, e2;
  @@
  -    pci_set_dma_mask(e1, e2)
  +    dma_set_mask(&e1->dev, e2)

  @@
  expression e1, e2;
  @@
  -    pci_set_consistent_dma_mask(e1, e2)
  +    dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Link: https://lore.kernel.org/r/20210113202519.487672-1-christophe.jaillet@wanadoo.fr Signed-off-by: Jakub Kicinski <kuba@kernel.org>
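By hand, the combined effect of the pci_alloc_consistent rule plus the manual GFP_ fix-up looks like this (illustration, not a hunk from the driver):

  /* before */
  ptr = pci_alloc_consistent(pdev, size, &dma_handle);

  /* after: process context, no locks held, so GFP_KERNEL is safe */
  ptr = dma_alloc_coherent(&pdev->dev, size, &dma_handle, GFP_KERNEL);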
-
Jakub Kicinski authored
Tobias Waldekranz says:
====================
net: dsa: mv88e6xxx: LAG fixes

The kernel test robot kindly pointed out that Global 2 support in mv88e6xxx is optional. This also made me realize that we should verify that the hardware actually supports LAG offloading before trying to configure it.
====================
Link: https://lore.kernel.org/r/20210115125259.22542-1-tobias@waldekranz.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tobias Waldekranz authored
There are chips that do not have Global 2 registers, and therefore trunk mapping/mask tables are not available. Refuse the offload as early as possible on those devices. Fixes: 57e661aa ("net: dsa: mv88e6xxx: Link aggregation support") Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tobias Waldekranz authored
Support for Global 2 registers is build-time optional. In the case where it was not enabled, the build would fail because no "dummy" implementations of these functions were available. Fixes: 57e661aa ("net: dsa: mv88e6xxx: Link aggregation support") Reported-by: kernel test robot <lkp@intel.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Tested-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
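The usual shape of such fallbacks, sketched here with an assumed function name and signature (the real header stubs out the driver's actual Global 2 trunk helpers):

  #ifdef CONFIG_NET_DSA_MV88E6XXX_GLOBAL2
  int mv88e6xxx_g2_trunk_mask_write(struct mv88e6xxx_chip *chip, int id,
                                    bool hash, u16 mask);
  #else
  static inline int mv88e6xxx_g2_trunk_mask_write(struct mv88e6xxx_chip *chip,
                                                  int id, bool hash, u16 mask)
  {
          /* LAG offload is simply refused when Global 2 support is compiled out */
          return -EOPNOTSUPP;
  }
  #endif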
-
- 15 Jan, 2021 26 commits
-
-
Jakub Kicinski authored
George McCollister says:
====================
Arrow SpeedChips XRS700x DSA Driver

This series adds a DSA driver for the Arrow SpeedChips XRS 7000 series of HSR/PRP gigabit switch chips. The chips use Flexibilis IP. More information can be found here: https://www.flexibilis.com/products/speedchips-xrs7000/

The switches have up to three RGMII ports and one MII port and are managed via mdio or i2c. They use a one-byte trailing tag to identify the switch port when in managed mode, so I've added a tag driver which implements this.

This series contains minimal DSA functionality which may be built upon in future patches. The ultimate goal is to add HSR and PRP (IEC 62439-3 Clause 5 & 4) offloading with integration into net/hsr.
====================
Link: https://lore.kernel.org/r/20210114195734.55313-1-george.mccollister@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
George McCollister authored
Add documentation and an example for Arrow SpeedChips XRS7000 Series single chip Ethernet switches. Signed-off-by: George McCollister <george.mccollister@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
George McCollister authored
Add a driver with initial support for the Arrow SpeedChips XRS7000 series of gigabit Ethernet switch chips, which are typically used in critical networking applications. The switches have up to three RGMII ports and one RMII port. Management of the switches can be performed over i2c or mdio. Support for advanced features such as PTP and HSR/PRP (IEC 62439-3 Clause 5 & 4) is not included in this patch and may be added at a later date. Signed-off-by: George McCollister <george.mccollister@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
George McCollister authored
Add support for Arrow SpeedChips XRS700x single byte tag trailer. This is modeled on tag_trailer.c which works in a similar way. Signed-off-by: George McCollister <george.mccollister@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
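A rough sketch of how a one-byte trailing tag works in a DSA tagger, loosely modeled on tag_trailer.c rather than the actual tag_xrs700x.c (tailroom and minimum-frame padding are omitted, and the exact tag layout is an assumption):

  static struct sk_buff *example_xmit(struct sk_buff *skb, struct net_device *dev)
  {
          struct dsa_port *dp = dsa_slave_to_port(dev);
          u8 *trailer;

          /* real code must also guarantee tailroom and pad short frames */
          trailer = skb_put(skb, 1);
          trailer[0] = BIT(dp->index);            /* destination port */

          return skb;
  }

  static struct sk_buff *example_rcv(struct sk_buff *skb, struct net_device *dev,
                                     struct packet_type *pt)
  {
          u8 *trailer = skb_tail_pointer(skb) - 1;
          int port = ffs(*trailer) - 1;           /* recover the source port */

          skb->dev = dsa_master_find_slave(dev, 0, port);
          if (!skb->dev)
                  return NULL;

          if (pskb_trim_rcsum(skb, skb->len - 1)) /* strip the tag byte */
                  return NULL;

          return skb;
  }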
-
Jakub Kicinski authored
Russell King says:
====================
Add further DT configuration for AT803x PHYs

This patch series adds the ability to configure the SmartEEE feature in AT803x PHYs. SmartEEE defaults to enabled on these PHYs, and has a history of causing random sporadic link drops at Gigabit speeds.

There appear to be two solutions to this. There is the approach that Freescale adopted early on, which is to disable the SmartEEE feature. However, this loses the power saving provided by EEE. Another solution, found by Jon Nettleton, is to increase the Tw parameter for Gigabit links.

This patch series adds support for both approaches, by adding a boolean: qca,disable-smarteee if one wishes to disable SmartEEE, and two properties to configure the SmartEEE Tw parameters: qca,smarteee-tw-us-100m qca,smarteee-tw-us-1g

Sadly, the PHY quirk I merged a while back for AT8035 on iMX6 is broken - rather than disabling SmartEEE mode, it enables it. The addition of these properties will be sent to the appropriate platform maintainers - although for SolidRun platforms, we only make use of "qca,smarteee-tw-us-1g".
====================
Link: https://lore.kernel.org/r/20210114104455.GP1551@shell.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Russell King authored
SmartEEE for the atheros phy was deemed buggy by Freescale and commits were added to disable it for their boards. In initial testing, SolidRun found that the default settings were causing disconnects but by increasing the Tw buffer time we could allow enough time for all parts of the link to come out of a low power state and function properly without causing a disconnect. This allows us to have functional power savings of between 300 and 400mW, rather than disabling the feature altogether. This commit adds support for disabling SmartEEE and configuring the Tw parameters for 1G and 100M speeds. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
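A sketch of how the binding might be consumed in the PHY driver's probe path; the property names come from the series, while the private structure and field names here are illustrative, and the actual register programming is omitted:

  struct example_priv {
          bool disable_smarteee;
          u32 smarteee_tw_us_1g;
          u32 smarteee_tw_us_100m;
  };

  static int example_parse_smarteee(struct phy_device *phydev)
  {
          struct device_node *node = phydev->mdio.dev.of_node;
          struct example_priv *priv = phydev->priv;
          u32 tw;

          priv->disable_smarteee =
                  of_property_read_bool(node, "qca,disable-smarteee");

          if (!of_property_read_u32(node, "qca,smarteee-tw-us-1g", &tw))
                  priv->smarteee_tw_us_1g = tw;

          if (!of_property_read_u32(node, "qca,smarteee-tw-us-100m", &tw))
                  priv->smarteee_tw_us_100m = tw;

          return 0;
  }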
-
Russell King authored
The SmartEEE feature of Atheros AR803x PHYs can cause the link to bounce. Add DT properties to allow SmartEEE to be disabled, and to allow the Tw parameters for 100M and 1G links to be configured. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alexei Starovoitov authored
Jiri Olsa says:
====================
hi, adding the support to have buildid stored in mmap2 event, so we can bypass the final perf record hunt on build ids.

This patchset allows perf to record build ID in mmap2 event, and adds perf tooling to store/download binaries to .debug cache based on these build IDs.

Note that the build id retrieval code is stolen from bpf code, where it's been used (together with file offsets) to replace IPs in user space stack traces. It's now added under lib directory.

v7 changes:
- included only missing kernel patches, cc-ed bpf@vger and rebased on bpf-next/master [Alexei]

v6 changes:
- last 4 patches rebased Arnaldo's perf/core

v5 changes:
- rebased on latest perf/core
- several patches already pulled in
- fixed trace+probe_vfs_getname.sh output redirection
- fixed changelogs [Arnaldo]
- renamed BUILD_ID_SIZE to BUILD_ID_SIZE_MAX [Song]

v4 changes:
- fixed typo in changelog [Namhyung]
- removed force_download bool from struct dso_store_data, because it's not used [Namhyung]

v3 changes:
- added acks
- removed forgotten debug code [Arnaldo]
- fixed readlink termination [Ian]
- fixed doc for --debuginfod=URLs [Ian]
- adopted kernel's memchr_inv function and used it in build_id__is_defined function [Arnaldo]

On recording server:

- on the recording server we can run record with --buildid-mmap option to store build ids in mmap2 events:

  # perf record --buildid-mmap
  ^C[ perf record: Woken up 2 times to write data ]
  [ perf record: Captured and wrote 0.836 MB perf.data ]

- it stores nothing to ~/.debug cache:

  # find ~/.debug
  find: ‘/root/.debug’: No such file or directory

- and still reports properly:

  # perf report --stdio
  ...
  99.82% swapper [kernel.kallsyms] [k] native_safe_halt
  0.03% swapper [kernel.kallsyms] [k] finish_task_switch
  0.02% swapper [kernel.kallsyms] [k] __softirqentry_text_start
  0.01% kcompactd0 [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
  0.01% ksoftirqd/6 [kernel.kallsyms] [k] slab_free_freelist_hook
  0.01% kworker/17:1H-x [kernel.kallsyms] [k] slab_free_freelist_hook

- display used/hit build ids:

  # perf buildid-list | head -5
  5dcec522abf136fcfd3128f47e131f2365834dd7 /proc/kcore
  589e403a34f55486bcac848a45e00bcdeedd1ca8 /usr/lib64/libcrypto.so.1.1.1g
  94569566d4eac7e9c87ba029d43d4e2158f9527e /usr/lib64/libpthread-2.30.so
  559b9702bebe31c6d132c8dc5cc887673d65d5b5 /usr/lib64/libc-2.30.so
  40da7abe89f631f60538a17686a7d65c6a02ed31 /usr/lib64/ld-2.30.so

- store build id binaries into build id cache:

  # perf buildid-cache -a perf.data
  OK 5dcec522abf136fcfd3128f47e131f2365834dd7 /proc/kcore
  OK 589e403a34f55486bcac848a45e00bcdeedd1ca8 /usr/lib64/libcrypto.so.1.1.1g
  OK 94569566d4eac7e9c87ba029d43d4e2158f9527e /usr/lib64/libpthread-2.30.so
  OK 559b9702bebe31c6d132c8dc5cc887673d65d5b5 /usr/lib64/libc-2.30.so
  OK 40da7abe89f631f60538a17686a7d65c6a02ed31 /usr/lib64/ld-2.30.so
  OK a674f7a47c78e35a088104647b9640710277b489 /usr/sbin/sshd
  OK e5cb4ca25f46485bdbc691c3a92e7e111dac3ef2 /usr/bin/bash
  OK 9bc8589108223c944b452f0819298a0c3cba6215 /usr/bin/find

  # find ~/.debug | head -5
  /root/.debug
  /root/.debug/proc
  /root/.debug/proc/kcore
  /root/.debug/proc/kcore/5dcec522abf136fcfd3128f47e131f2365834dd7
  /root/.debug/proc/kcore/5dcec522abf136fcfd3128f47e131f2365834dd7/kallsyms

- run debuginfod daemon to provide binaries to another server (below) (the initialization could take some time)

  # debuginfod -F /

On another server:

- copy perf.data from 'record' server and run:

  $ find ~/.debug/
  find: ‘/home/jolsa/.debug/’: No such file or directory
  $ perf buildid-list | head -5
  No kallsyms or vmlinux with build-id 5dcec522abf136fcfd3128f47e131f2365834dd7 was found
  5dcec522abf136fcfd3128f47e131f2365834dd7 [kernel.kallsyms]
  5784f813b727a50cfd3363234aef9fcbab685cc4 /lib/modules/5.10.0-rc2speed+/kernel/fs/xfs/xfs.ko
  589e403a34f55486bcac848a45e00bcdeedd1ca8 /usr/lib64/libcrypto.so.1.1.1g
  94569566d4eac7e9c87ba029d43d4e2158f9527e /usr/lib64/libpthread-2.30.so
  559b9702bebe31c6d132c8dc5cc887673d65d5b5 /usr/lib64/libc-2.30.so

- report does not show anything (kernel build id does not match):

  $ perf report --stdio
  ...
  76.73% swapper [kernel.kallsyms] [k] 0xffffffff81aa8ebe
  1.89% find [kernel.kallsyms] [k] 0xffffffff810f2167
  0.93% sshd [kernel.kallsyms] [k] 0xffffffff8153380c
  0.83% swapper [kernel.kallsyms] [k] 0xffffffff81104b0b
  0.71% kworker/u40:2-e [kernel.kallsyms] [k] 0xffffffff810f3850
  0.70% kworker/u40:0-e [kernel.kallsyms] [k] 0xffffffff810f3850
  0.64% find [kernel.kallsyms] [k] 0xffffffff81a9ba0a
  0.63% find [kernel.kallsyms] [k] 0xffffffff81aa93b0

- add build ids does not work, because existing binaries (on another server) have different build ids:

  $ perf buildid-cache -a perf.data
  No kallsyms or vmlinux with build-id 5dcec522abf136fcfd3128f47e131f2365834dd7 was found
  FAIL 5dcec522abf136fcfd3128f47e131f2365834dd7 [kernel.kallsyms]
  FAIL 5784f813b727a50cfd3363234aef9fcbab685cc4 /lib/modules/5.10.0-rc2speed+/kernel/fs/xfs/xfs.ko
  FAIL 589e403a34f55486bcac848a45e00bcdeedd1ca8 /usr/lib64/libcrypto.so.1.1.1g
  FAIL 94569566d4eac7e9c87ba029d43d4e2158f9527e /usr/lib64/libpthread-2.30.so
  FAIL 559b9702bebe31c6d132c8dc5cc887673d65d5b5 /usr/lib64/libc-2.30.so
  FAIL 40da7abe89f631f60538a17686a7d65c6a02ed31 /usr/lib64/ld-2.30.so
  FAIL a674f7a47c78e35a088104647b9640710277b489 /usr/sbin/sshd
  FAIL e5cb4ca25f46485bdbc691c3a92e7e111dac3ef2 /usr/bin/bash
  FAIL 9bc8589108223c944b452f0819298a0c3cba6215 /usr/bin/find

- add build ids with debuginfod setup pointing to record server:

  $ perf buildid-cache -a perf.data --debuginfod http://192.168.122.174:8002
  No kallsyms or vmlinux with build-id 5dcec522abf136fcfd3128f47e131f2365834dd7 was found
  OK 5dcec522abf136fcfd3128f47e131f2365834dd7 [kernel.kallsyms]
  OK 5784f813b727a50cfd3363234aef9fcbab685cc4 /lib/modules/5.10.0-rc2speed+/kernel/fs/xfs/xfs.ko
  OK 589e403a34f55486bcac848a45e00bcdeedd1ca8 /usr/lib64/libcrypto.so.1.1.1g
  OK 94569566d4eac7e9c87ba029d43d4e2158f9527e /usr/lib64/libpthread-2.30.so
  OK 559b9702bebe31c6d132c8dc5cc887673d65d5b5 /usr/lib64/libc-2.30.so
  OK 40da7abe89f631f60538a17686a7d65c6a02ed31 /usr/lib64/ld-2.30.so
  OK a674f7a47c78e35a088104647b9640710277b489 /usr/sbin/sshd
  OK e5cb4ca25f46485bdbc691c3a92e7e111dac3ef2 /usr/bin/bash
  OK 9bc8589108223c944b452f0819298a0c3cba6215 /usr/bin/find

- and report works:

  $ perf report --stdio
  ...
  76.73% swapper [kernel.kallsyms] [k] native_safe_halt
  1.91% find [kernel.kallsyms] [k] queue_work_on
  0.93% sshd [kernel.kallsyms] [k] iowrite16
  0.83% swapper [kernel.kallsyms] [k] finish_task_switch
  0.72% kworker/u40:2-e [kernel.kallsyms] [k] process_one_work
  0.70% kworker/u40:0-e [kernel.kallsyms] [k] process_one_work
  0.64% find [kernel.kallsyms] [k] syscall_enter_from_user_mode
  0.63% find [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore

- because we have the data in build id cache:

  $ find ~/.debug | head -10
  .../.debug
  .../.debug/home
  .../.debug/home/jolsa
  .../.debug/home/jolsa/.cache
  .../.debug/home/jolsa/.cache/debuginfod_client
  .../.debug/home/jolsa/.cache/debuginfod_client/5dcec522abf136fcfd3128f47e131f2365834dd7
  .../.debug/home/jolsa/.cache/debuginfod_client/5dcec522abf136fcfd3128f47e131f2365834dd7/executable
  .../.debug/home/jolsa/.cache/debuginfod_client/5dcec522abf136fcfd3128f47e131f2365834dd7/executable/5dcec522abf136fcfd3128f47e131f2365834dd7
  .../.debug/home/jolsa/.cache/debuginfod_client/5dcec522abf136fcfd3128f47e131f2365834dd7/executable/5dcec522abf136fcfd3128f47e131f2365834dd7/elf
  .../.debug/home/jolsa/.cache/debuginfod_client/5dcec522abf136fcfd3128f47e131f2365834dd7/executable/5dcec522abf136fcfd3128f47e131f2365834dd7/debug

Available also in: git://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git perf/build_id

thanks, jirka
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiri Olsa authored
Adding support to carry build id data in mmap2 event. The build id data replaces maj/min/ino/ino_generation fields, which are also used to identify map's binary, so it's ok to replace them with build id data:

  union {
          struct {
                  u32 maj;
                  u32 min;
                  u64 ino;
                  u64 ino_generation;
          };
          struct {
                  u8  build_id_size;
                  u8  __reserved_1;
                  u16 __reserved_2;
                  u8  build_id[20];
          };
  };

The replaced maj/min/ino/ino_generation fields give us a size of 24 bytes. We use 20 bytes for build id data, 1 byte for the size, and the rest is unused. There's a new misc bit for mmap2 to signal there's build id data in it:

  #define PERF_RECORD_MISC_MMAP_BUILD_ID (1 << 14)

Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/bpf/20210114134044.1418404-4-jolsa@kernel.org
-
Jiri Olsa authored
It's possible to have other build id types (other than default SHA1). Currently there's also ld support for MD5 build id. Adding size argument to build_id_parse function, that returns (if defined) size of the parsed build id, so we can recognize the build id type. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20210114134044.1418404-3-jolsa@kernel.org
-
Jiri Olsa authored
Moving stack_map_get_build_id into lib with declaration in linux/buildid.h header: int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id); This function returns build id for given struct vm_area_struct. There is no functional change to stack_map_get_build_id function. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20210114134044.1418404-2-jolsa@kernel.org
-
Jakub Kicinski authored (merge of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net)
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alexei Starovoitov authored
Brendan Jackman says:
====================
There's still one unresolved review comment from John[3] which I will resolve with a followup patch.

Differences from v6->v7 [1]:
* Fixed riscv build error detected by 0-day robot.

Differences from v5->v6 [1]:
* Carried Björn Töpel's ack for RISC-V code, plus a couple more acks from Yonghong.
* Doc fixups.
* Trivial cleanups.

Differences from v4->v5 [1]:
* Fixed bogus type casts in interpreter that led to warnings from the 0day robot.
* Dropped feature-detection for Clang per Andrii's suggestion in [4]. The selftests will now fail to build unless you have llvm-project commit 286daafd6512. The ENABLE_ATOMICS_TEST macro is still needed to support the no_alu32 tests.
* Carried some Acks from John and Yonghong.
* Dropped confusing usage of __atomic_exchange from prog_test in favour of __sync_lock_test_and_set.
* [Really] got rid of all the forest of instruction macros (BPF_ATOMIC_FETCH_ADD and friends); now there's just BPF_ATOMIC_OP to define all the instructions as we use them in the verifier tests. This makes the atomic ops less special in that API, and I don't think the resulting usage is actually any harder to read.

Differences from v3->v4 [1]:
* Added one Ack from Yonghong. He acked some other patches but those have now changed non-trivially so I didn't add those acks.
* Fixups to commit messages.
* Fixed disassembly and comments: first arg to atomic_fetch_* is a pointer.
* Improved prog_test efficiency. BPF progs are now all loaded in a single call, then the skeleton is re-used for each subtest.
* Dropped use of tools/build/feature in favour of a one-liner in the Makefile.
* Dropped the commit that created an emit_neg helper in the x86 JIT. It's not used any more (it wasn't used in v3 either).
* Combined all the different filter.h macros (used to be BPF_ATOMIC_ADD, BPF_ATOMIC_FETCH_ADD, BPF_ATOMIC_AND, etc) into just BPF_ATOMIC32 and BPF_ATOMIC64.
* Removed some references to BPF_STX_XADD from tools/, samples/ and lib/ that I missed before.

Differences from v2->v3 [1]:
* More minor fixes and naming/comment changes.
* Dropped atomic subtract: compilers can implement this by preceding an atomic add with a NEG instruction (which is what the x86 JIT did under the hood anyway).
* Dropped the use of -mcpu=v4 in the Clang BPF command-line; there is no longer an architecture version bump. Instead a feature test is added to Kbuild - it builds a source file to check if Clang supports BPF atomics.
* Fixed the prog_test so it no longer breaks test_progs-no_alu32. This requires some ifdef acrobatics to avoid complicating the prog_tests model where the same userspace code exercises both the normal and no_alu32 BPF test objects, using the same skeleton header.

Differences from v1->v2 [1]:
* Fixed mistakes in the netronome driver.
* Added sub, add, or, xor operations.
* The above led to some refactors to keep things readable. (Maybe I should have just waited until I'd implemented these before starting the review...)
* Replaced BPF_[CMP]SET | BPF_FETCH with just BPF_[CMP]XCHG, which include the BPF_FETCH flag.
* Added a bit of documentation. Suggestions welcome for more places to dump this info...

The prog_test that's added depends on Clang/LLVM features added by Yonghong in commit 286daafd6512 (was https://reviews.llvm.org/D72184).

This only includes a JIT implementation for x86_64 - I don't plan to implement JIT support myself for other architectures.

Operations
==========

This patchset adds atomic operations to the eBPF instruction set. The use-case that motivated this work was a trivial and efficient way to generate globally-unique cookies in BPF progs, but I think it's obvious that these features are pretty widely applicable. The instructions that are added here can be summarised with this list of kernel operations:

* atomic[64]_[fetch_]add
* atomic[64]_[fetch_]and
* atomic[64]_[fetch_]or
* atomic[64]_xchg
* atomic[64]_cmpxchg

The following are left out of scope for this effort:

* 16 and 8 bit operations
* Explicit memory barriers

Encoding
========

I originally planned to add new values for bpf_insn.opcode. This was rather unpleasant: the opcode space has holes in it but no entire instruction classes[2]. Yonghong Song had a better idea: use the immediate field of the existing STX XADD instruction to encode the operation. This works nicely, without breaking existing programs, because the immediate field is currently reserved-must-be-zero, and extra-nicely because BPF_ADD happens to be zero.

Note that this of course makes immediate-source atomic operations impossible. It's hard to imagine a measurable speedup from such instructions, and if it existed it would certainly not benefit x86, which has no support for them.

The BPF_OP opcode fields are re-used in the immediate, and an additional flag BPF_FETCH is used to mark instructions that should fetch a pre-modification value from memory. So, BPF_XADD is now called BPF_ATOMIC (the old name is kept to avoid breaking userspace builds), and where we previously had .imm = 0, we now have .imm = BPF_ADD (which is 0).

Operands
========

Reg-source eBPF instructions only have two operands, while these atomic operations have up to four. To avoid needing to encode additional operands, then:

- One of the input registers is re-used as an output register (e.g. atomic_fetch_add both reads from and writes to the source register).
- Where necessary (i.e. for cmpxchg), R0 is "hard-coded" as one of the operands.

This approach also allows the new eBPF instructions to map directly to single x86 instructions.

[1] Previous iterations:
v1: https://lore.kernel.org/bpf/20201123173202.1335708-1-jackmanb@google.com/
v2: https://lore.kernel.org/bpf/20201127175738.1085417-1-jackmanb@google.com/
v3: https://lore.kernel.org/bpf/X8kN7NA7bJC7aLQI@google.com/
v4: https://lore.kernel.org/bpf/20201207160734.2345502-1-jackmanb@google.com/
v5: https://lore.kernel.org/bpf/20201215121816.1048557-1-jackmanb@google.com/
v6: https://lore.kernel.org/bpf/20210112154235.2192781-1-jackmanb@google.com/

[2] Visualisation of eBPF opcode space: https://gist.github.com/bjackman/00fdad2d5dfff601c1918bc29b16e778

[3] Comment from John about propagating bounds in verifier: https://lore.kernel.org/bpf/5fcf0fbcc8aa8_9ab320853@john-XPS-13-9370.notmuch/

[4] Mail from Andrii about not supporting old Clang in selftests: https://lore.kernel.org/bpf/CAEf4BzYBddPaEzRUs=jaWSo5kbf=LZdb7geAUVj85GxLQztuAQ@mail.gmail.com/
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
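From BPF C, the new instructions are what Clang emits for the __sync builtins when their return value is actually used; a minimal self-contained sketch (program and variable names are made up) matching the cookie-generation use case mentioned above:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  __u64 counter = 0;
  __u64 last_cookie = 0;

  SEC("tp/syscalls/sys_enter_getpid")
  int gen_cookie(void *ctx)
  {
          /* atomic64_fetch_add: BPF_ATOMIC | BPF_ADD | BPF_FETCH under the hood,
           * so it needs a Clang that supports the new BPF atomics. */
          last_cookie = __sync_fetch_and_add(&counter, 1);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";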
-
Brendan Jackman authored
Document new atomic instructions. Signed-off-by: Brendan Jackman <jackmanb@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20210114181751.768687-12-jackmanb@google.com
-
Brendan Jackman authored
The prog_test that's added depends on Clang/LLVM features added by Yonghong in commit 286daafd6512 (was https://reviews.llvm.org/D72184). Note the use of a define called ENABLE_ATOMICS_TESTS: this is used to: - Avoid breaking the build for people on old versions of Clang - Avoid needing separate lists of test objects for no_alu32, where atomics are not supported even if Clang has the feature. The atomics_test.o BPF object is built unconditionally both for test_progs and test_progs-no_alu32. For test_progs, if Clang supports atomics, ENABLE_ATOMICS_TESTS is defined, so it includes the proper test code. Otherwise, progs and global vars are defined anyway, as stubs; this means that the skeleton user code still builds. The atomics_test.o userspace object is built once and used for both test_progs and test_progs-no_alu32. A variable called skip_tests is defined in the BPF object's data section, which tells the userspace object whether to skip the atomics test. Signed-off-by: Brendan Jackman <jackmanb@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20210114181751.768687-11-jackmanb@google.com
-
Brendan Jackman authored
This adds instructions for

  atomic[64]_[fetch_]and
  atomic[64]_[fetch_]or
  atomic[64]_[fetch_]xor

All these operations are isomorphic enough to implement with the same verifier, interpreter, and x86 JIT code, hence being a single commit. The main interesting thing here is that x86 doesn't directly support the fetch_ version of these operations, so we need to generate a CMPXCHG loop in the JIT. This requires the use of two temporary registers; IIUC it's safe to use BPF_REG_AX and x86's AUX_REG for this purpose. Signed-off-by: Brendan Jackman <jackmanb@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20210114181751.768687-10-jackmanb@google.com
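The C-level equivalent of the loop the JIT has to emit for the fetch_ variants (illustrative semantics only, not the emitter code):

  static u64 fetch_or_sketch(u64 *ptr, u64 src)
  {
          u64 old, new;

          do {
                  old = READ_ONCE(*ptr);
                  new = old | src;
          } while (cmpxchg(ptr, old, new) != old);

          return old;     /* the pre-modification value goes back to the src register */
  }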
-
Brendan Jackman authored
Since the atomic operations that are added in subsequent commits are all isomorphic with BPF_ADD, pull out a macro to avoid the interpreter becoming dominated by lines of atomic-related code. Note that this sacrifices interpreter performance (combining STX_ATOMIC_W and STX_ATOMIC_DW into a single switch case means that we need an extra conditional branch to differentiate them) in favour of compact and (relatively!) simple C code. Signed-off-by: Brendan Jackman <jackmanb@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20210114181751.768687-9-jackmanb@google.com
-
Brendan Jackman authored
This adds two atomic opcodes, both of which include the BPF_FETCH flag. XCHG without the BPF_FETCH flag would naturally encode atomic_set. This is not supported because it would be of limited value to userspace (it doesn't imply any barriers). CMPXCHG without BPF_FETCH would be an atomic compare-and-write. We don't have such an operation in the kernel so it isn't provided to BPF either.

There are two significant design decisions made for the CMPXCHG instruction:

- This operation fundamentally has 3 operands, but we only have two register fields. Therefore the operand we compare against (the kernel's API calls it 'old') is hard-coded to be R0. x86 has a similar design (and A64 doesn't have this problem). A potential alternative might be to encode the other operand's register number in the immediate field.

- The kernel's atomic_cmpxchg returns the old value, while the C11 userspace APIs return a boolean indicating the comparison result. Which should BPF do? A64 returns the old value. x86 returns the old value in the hard-coded register (and also sets a flag). That means return-old-value is easier to JIT, so that's what we use.

Signed-off-by: Brendan Jackman <jackmanb@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20210114181751.768687-8-jackmanb@google.com
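In verifier-test style, and assuming the BPF_ATOMIC_OP(size, op, dst, src, off) macro mentioned in the cover letter, the two ops could be spelled roughly like this:

  /* R0 = atomic64_cmpxchg((u64 *)(R8 + 0), R0, R1); the old value lands in R0 */
  BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, BPF_REG_8, BPF_REG_1, 0),

  /* R1 = atomic64_xchg((u64 *)(R8 + 0), R1) */
  BPF_ATOMIC_OP(BPF_DW, BPF_XCHG, BPF_REG_8, BPF_REG_1, 0),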
-
Brendan Jackman authored
The BPF_FETCH field can be set in bpf_insn.imm, for BPF_ATOMIC instructions, in order to have the previous value of the atomically-modified memory location loaded into the src register after an atomic op is carried out. Suggested-by: Yonghong Song <yhs@fb.com> Signed-off-by: Brendan Jackman <jackmanb@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20210114181751.768687-7-jackmanb@google.com
-
Brendan Jackman authored
I can't find a reason why this code is in resolve_pseudo_ldimm64; since I'll be modifying it in a subsequent commit, tidy it up. Signed-off-by: Brendan Jackman <jackmanb@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20210114181751.768687-6-jackmanb@google.com
-
Brendan Jackman authored
A subsequent patch will add additional atomic operations. These new operations will use the same opcode field as the existing XADD, with the immediate discriminating different operations. In preparation, rename the instruction mode BPF_ATOMIC and start calling the zero immediate BPF_ADD. This is possible (doesn't break existing valid BPF progs) because the immediate field is currently reserved MBZ and BPF_ADD is zero. All uses are removed from the tree but the BPF_XADD definition is kept around to avoid breaking builds for people including kernel headers. Signed-off-by: Brendan Jackman <jackmanb@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Björn Töpel <bjorn.topel@gmail.com> Link: https://lore.kernel.org/bpf/20210114181751.768687-5-jackmanb@google.com
-
Brendan Jackman authored
A later commit will need to lookup a subset of these opcodes. To avoid duplicating code, pull out a table. The shift opcodes won't be needed by that later commit, but they're already duplicated, so fold them into the table anyway. Signed-off-by: Brendan Jackman <jackmanb@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20210114181751.768687-4-jackmanb@google.com
-
Brendan Jackman authored
The JIT case for encoding atomic ops is about to get more complicated. In order to make the review & resulting code easier, let's factor out some shared helpers. Signed-off-by: Brendan Jackman <jackmanb@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20210114181751.768687-3-jackmanb@google.com
-
Brendan Jackman authored
The case for JITing atomics is about to get more complicated. Let's factor out some common code to make the review and result more readable. NB the atomics code doesn't yet use the new helper - a subsequent patch will add its use as a side-effect of other changes. Signed-off-by: Brendan Jackman <jackmanb@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20210114181751.768687-2-jackmanb@google.com
-
Jakub Kicinski authored
Eran Ben Elisha says:
====================
Dissect PTP L2 packet header

This series adds support for dissecting PTP L2 packet header (EtherType 0x88F7). For packet header dissecting, skb->protocol is needed. Add protocol parsing operation to vlan ops, to guarantee skb->protocol is set, as EtherType 0x88F7 occasionally follows a vlan header.
====================
Link: https://lore.kernel.org/r/1610478433-7606-1-git-send-email-eranbe@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Eran Ben Elisha authored
Add support for parsing PTP L2 packet header. Such a packet consists of an L2 header (with ethertype of ETH_P_1588), PTP header, body and an optional suffix. Signed-off-by: Eran Ben Elisha <eranbe@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
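A simplified sketch of the kind of handling the dissector needs for this (the helper name is made up; the real patch hooks into __skb_flow_dissect, and struct ptp_header comes from linux/ptp_classify.h):

  static bool example_dissect_ptp_l2(const struct sk_buff *skb, const void *data,
                                     int *nhoff, int hlen)
  {
          struct ptp_header *hdr, _hdr;

          /* ETH_P_1588 (0x88F7): the PTP header follows the Ethernet header */
          hdr = __skb_header_pointer(skb, *nhoff, sizeof(_hdr), data, hlen, &_hdr);
          if (!hdr)
                  return false;

          /* advance past the whole PTP message (header, body, optional suffix) */
          *nhoff += ntohs(hdr->message_length);
          return true;
  }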
-