- 20 Jan, 2018 9 commits
-
-
Daniel Borkmann authored
Given the limit could potentially get further adjustments in the future, add it to the verifier log so it becomes obvious what the current limit is without having to check the source first. This may also be helpful for debugging complexity-related issues on kernels that backport from upstream. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Daniel Borkmann authored
For BPF_REG_0 (BPF_REG_A in cBPF, respectively), we can use the short form of the opcode since the dst mapping is on eax/rax, and thus save a byte per such operation. This is done for add/sub/and/or/xor in the 32/64-bit variants when a K immediate is used. There may be more such low-hanging fruit to pick in the future. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
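For illustration, a minimal sketch of the encoding difference this exploits (byte sequences per the x86 ISA; not the actual arch/x86/net/bpf_jit_comp.c hunks):

```c
/* x86 has dedicated one-byte opcodes for ALU ops whose destination is
 * eax/rax, which drops the ModRM byte of the generic imm32 form: */
static const unsigned char add_generic[] =	/* add eax, 0x2a (6 bytes) */
	{ 0x81, 0xc0, 0x2a, 0x00, 0x00, 0x00 };
static const unsigned char add_short[] =	/* add eax, 0x2a (5 bytes) */
	{ 0x05, 0x2a, 0x00, 0x00, 0x00 };
/* for rax, both forms gain a REX.W prefix; the one-byte saving remains */
```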
-
Daniel Borkmann authored
Given BPF reaches far beyond just networking these days, it was never intended that a user namespace root running without CAP_SYS_ADMIN be able to set, and in some cases read, those knobs, so tighten such access. Also, the bpf_jit_enable = 2 debugging mode should only be allowed if kptr_restrict is not set, since it can otherwise leak addresses to the kernel log. Dump a note to the kernel log that this mode is for debugging JITs only when it is enabled. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
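A hedged sketch of the kind of check this implies for the sysctl proc handler (the handler name and surrounding code are assumptions, not the literal net/core patch):

```c
static int proc_dointvec_minmax_bpf_enable(struct ctl_table *table, int write,
					   void __user *buffer, size_t *lenp,
					   loff_t *ppos)
{
	/* writes from a userns root lacking CAP_SYS_ADMIN in the init ns
	 * are refused; capable() checks against the initial user namespace */
	if (write && !capable(CAP_SYS_ADMIN))
		return -EPERM;

	return proc_dointvec_minmax(table, write, buffer, lenp, ppos);
}
```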
-
Daniel Borkmann authored
Having a pure_initcall() callback just to permanently enable BPF JITs under CONFIG_BPF_JIT_ALWAYS_ON is unnecessary and could leave a small race window in the future where the JIT is still disabled on boot. Since we know about the setting at compile time anyway, just initialize it properly there. Also consolidate all the individual bpf_jit_enable variables into a single one, move them under one location, and don't allow setting unspecified garbage values on them. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
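Conceptually, the compile-time initialization could look like this (a sketch; the in-tree version may differ in detail):

```c
#ifdef CONFIG_BPF_JIT_ALWAYS_ON
int bpf_jit_enable __read_mostly = 1;	/* fixed on, no initcall race */
#else
int bpf_jit_enable __read_mostly;	/* off by default, sysctl-guarded */
#endif
```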
-
Daniel Borkmann authored
Add a couple of missing test cases for eBPF div/mod by zero to the new test_verifier prog runtime feature, plus one for an empty prog that only exits. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Daniel Borkmann authored
Add a couple of test cases for the interpreter and JIT that are related to an issue we faced some time ago in Cilium [1], which is fixed in LLVM with commit e53750e1e086 ("bpf: fix bug on silently truncating 64-bit immediate"). The test cases run-time check that the kernel behaves as intended, which should also provide some guidance for current or new JITs in case they trip over this. Added for cBPF and eBPF. [1] https://github.com/cilium/cilium/pull/2162 Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
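As a hedged illustration of what such a test's .insns array might assert (not the literal test_bpf/test_verifier entries): a 32-bit mov must zero-extend, while a 64-bit immediate load must preserve the upper bits rather than sign-extending.

```c
BPF_MOV32_IMM(BPF_REG_0, 0xffffffff),	/* r0 = 0x00000000ffffffff */
BPF_LD_IMM64(BPF_REG_1, 0xffffffffULL),	/* r1 must match, not sign-extend */
BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_1, 2),
BPF_MOV64_IMM(BPF_REG_0, 1),		/* mismatch */
BPF_EXIT_INSN(),
BPF_MOV64_IMM(BPF_REG_0, 2),		/* expected result */
BPF_EXIT_INSN(),
```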
-
Daniel Borkmann authored
Useful for porting cls_bpf programs while not increasing program complexity limits much at the same time, so add the helper to XDP as well. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Daniel Borkmann authored
I've seen two patch proposals now for helper additions that used ARG_PTR_TO_MEM or similar in reg_X but no corresponding ARG_CONST_SIZE in reg_X+1. The verifier won't complain in such a case, but it will omit verifying the memory passed to the helper, which ends badly. Detect such buggy helper function signatures and bail out during verification rather than catching them in review. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
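A simplified sketch of such a signature check (the in-tree verifier version is more thorough, e.g. it also rejects a size argument without a preceding pointer argument):

```c
static bool check_arg_pair_ok(const struct bpf_func_proto *fn)
{
	enum bpf_arg_type args[] = { fn->arg1_type, fn->arg2_type,
				     fn->arg3_type, fn->arg4_type,
				     fn->arg5_type };
	int i;

	for (i = 0; i < ARRAY_SIZE(args); i++) {
		bool mem = args[i] == ARG_PTR_TO_MEM ||
			   args[i] == ARG_PTR_TO_UNINIT_MEM;
		bool size_next = i + 1 < ARRAY_SIZE(args) &&
				 (args[i + 1] == ARG_CONST_SIZE ||
				  args[i + 1] == ARG_CONST_SIZE_OR_ZERO);

		if (mem && !size_next)
			return false;	/* buggy helper signature */
	}
	return true;
}
```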
-
Jesper Dangaard Brouer authored
The xdp_redirect_cpu sample has some "builtin" monitoring of the xdp_cpumap_* tracepoints, but it is practical to have an external tool that can monitor these tracepoints as an easy way to troubleshoot an application using XDP + cpumap. Specifically, I need such an external tool when working on Suricata and XDP cpumap redirect. Extend the xdp_monitor sample tool with monitoring of these xdp_cpumap_* tracepoints, modeling the output format on xdp_redirect_cpu. Given per-CPU decoding was needed for cpumap, this patch also adds per-CPU info to the existing monitor events. This resembles part of the builtin monitoring output from the xdp_rxq_info sample, thus also covering part of that sample in an external monitoring tool. Performance-wise, the cpumap tracepoints use bulking, which causes them to have very little overhead; thus, they are enabled by default. Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
- 19 Jan, 2018 5 commits
-
-
Daniel Borkmann authored
Yonghong Song says: ==================== This patch set implements the MAP_GET_NEXT_KEY command for the LPM_TRIE map. This command is really useful for key enumeration, and for key deletion when the keys in the trie are unknown. Patch #1 implements the functionality in the kernel and patch #2 adds a test case in tools/testing/selftests/bpf. ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Yonghong Song authored
A test case is added in tools/testing/selftests/bpf/test_lpm_map.c for the MAP_GET_NEXT_KEY command. A four-node trie, as described in kernel/bpf/lpm_trie.c, is built and the MAP_GET_NEXT_KEY results are checked. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Yonghong Song authored
The current LPM_TRIE map type does not implement the MAP_GET_NEXT_KEY command. This command is handy when users want to enumerate keys; otherwise, a different map which supports key enumeration may be required to store the keys. If the map data is sparse and all map data are to be deleted without closing the file descriptor, using MAP_GET_NEXT_KEY to find all keys is much faster than enumerating the whole key space. This patch implements the MAP_GET_NEXT_KEY command for the LPM_TRIE map. If the user-provided key pointer is NULL or the key does not have an exact match in the trie, the first key is returned; otherwise, the next key is returned. In this implementation, key enumeration follows a postorder traversal of the internal trie: given a sequence of MAP_GET_NEXT_KEY syscalls, more specific keys are returned before less specific ones. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
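A hedged user-space sketch of the delete-all use case via libbpf's bpf_map_get_next_key()/bpf_map_delete_elem() (the key struct is illustrative; its layout must match struct bpf_lpm_trie_key plus the data bytes):

```c
struct lpm_key {
	__u32 prefixlen;	/* mirrors struct bpf_lpm_trie_key */
	__u8  data[4];		/* e.g. an IPv4 prefix */
} key, next;
void *cur = NULL;		/* NULL: ask for the first key */

while (bpf_map_get_next_key(map_fd, cur, &next) == 0) {
	bpf_map_delete_elem(map_fd, &next);
	key = next;		/* a deleted key has no exact match, so the
				 * next call falls back to the first key */
	cur = &key;
}
```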
-
Shuah Khan authored
Update .gitignore with missing generated files. Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Roman Gushchin authored
Add the BPF_MAP_TYPE_CPUMAP map type to the list of map types recognized by bpftool and define the corresponding text representation. Signed-off-by: Roman Gushchin <guro@fb.com> Cc: Quentin Monnet <quentin.monnet@netronome.com> Cc: Jakub Kicinski <jakub.kicinski@netronome.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Alexei Starovoitov <ast@kernel.org> Acked-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
- 18 Jan, 2018 21 commits
-
-
Daniel Borkmann authored
Jakub Kicinski says: ==================== This set brings in the rest of the map offload code held up by urgent fixes and improvements to the BPF arrays. The first 3 patches take care of array map offload; similarly to hash maps, the attribute validation is split out to a separate map op and used for both the offloaded and non-offloaded case (allocation only happens if the map is on the host). Offload support comes down to allowing this map type through the offload check in the core. The NFP driver also rejects the delete operation in case of array maps. Subsequent patches add reporting of the target device, in a very similar way to how the target device of programs is reported (ifindex+netns dev/ino). Netdevsim is extended with a trivial map implementation, allowing us to test the offload in test_offload.py. The last patch adds a small busy wait to NFP map IO; this improves response times, which is especially useful for map dumps. ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
Scheduling out and in for every FW message can slow us down unnecessarily. Our experiments show that even under heavy load the FW responds to 99.9% of messages within 200 us. Add a short busy wait before entering the wait queue. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
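The pattern, sketched with hypothetical names (reply_ready(), cmsg_wq, and the timeouts are assumptions, not the literal drivers/net/ethernet/netronome code):

```c
int budget = 200;	/* ~200 us of polling, 1 us per iteration */

while (!reply_ready(bpf, tag) && budget--)
	udelay(1);

if (!reply_ready(bpf, tag))	/* slow path: sleep until the reply arrives */
	err = wait_event_interruptible_timeout(bpf->cmsg_wq,
					       reply_ready(bpf, tag),
					       msecs_to_jiffies(5000));
```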
-
Jakub Kicinski authored
Check that map device information is reported correctly and perform basic map operations. Also check that device destruction gets rid of the maps, and exercise the map allocation failure path by telling netdevsim to reject map offload via DebugFS. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
Add to netdevsim the ability to pretend it's offloading BPF maps. We only allow allocation of tiny 2-entry maps, to keep things simple. A mutex lock may seem heavy for the operations we perform, but we want to make sure the callbacks can sleep. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
Print information about the device on which a map is created. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
Tell user space about the device on which the map was created. The unfortunate reality of the user ABI makes sharing this code with program offload difficult, but the information is the same. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
The special handling of different map types is left to the driver, so allow offload of array maps by simply adding them to the accepted types. For nfp, we have to make sure array elements are not deleted. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
Arraymap was not converted to use bpf_map_init_from_attr() to avoid merge conflicts with emergency fixes. Do it now. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
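For reference, roughly what bpf_map_init_from_attr() centralizes (field list abridged; a sketch, not the verbatim kernel/bpf/syscall.c body):

```c
void bpf_map_init_from_attr(struct bpf_map *map, union bpf_attr *attr)
{
	/* copy the user-supplied creation attributes into the map */
	map->map_type    = attr->map_type;
	map->key_size    = attr->key_size;
	map->value_size  = attr->value_size;
	map->max_entries = attr->max_entries;
	map->map_flags   = attr->map_flags;
	map->numa_node   = bpf_map_attr_numa_node(attr);
}
```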
-
Jakub Kicinski authored
Use the new callback to perform allocation checks for array maps. The fd maps don't need a special allocation callback; they only need a special check callback. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Daniel Borkmann authored
Alexei Starovoitov says: ==================== The BPF verifier has 700+ tests used to check its correctness. Beyond checking the verifier log, tell the kernel to also run accepted programs via the bpf_prog_test_run() command. That improves the quality of the tests and increases BPF test coverage. ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Alexei Starovoitov authored
To improve test coverage, make test_verifier run all successfully loaded programs on 64 bytes of zero-initialized data. For clsbpf and xdp this means an empty 64-byte packet; for lwt and socket_filters it's a 64-byte packet where skb->data points after L2. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
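A hedged sketch of what the harness does per accepted program, via the libbpf wrapper of the time (needs <bpf/bpf.h> and <stdio.h>):

```c
char data[64] = {};		/* zero-initialized 64-byte packet */
__u32 retval = 0, duration = 0;
int err;

err = bpf_prog_test_run(prog_fd, 1 /* repeat */, data, sizeof(data),
			NULL, NULL, &retval, &duration);
if (err)
	printf("test_run failed: %d\n", err);
```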
-
Alexei Starovoitov authored
In order to improve test coverage, allow the socket_filter program type to be run via the bpf_prog_test_run command. Since such programs can be loaded by non-root, tighten permissions for bpf_prog_test_run to root-only to avoid surprises. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jesper Dangaard Brouer authored
Update tools/include/uapi/linux/bpf.h to bring it in sync with include/uapi/linux/bpf.h. The listed commits forgot to update it. Fixes: 02dd3291 ("bpf: finally expose xdp_rxq_info to XDP bpf-programs") Fixes: f19397a5 ("bpf: Add access to snd_cwnd and others in sock_ops") Fixes: 06ef0ccb ("bpf/cgroup: fix a verification error for a CGROUP_DEVICE type prog") Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Dan Carpenter authored
There is a static checker warning that "proglen" has an upper bound but no lower bound. The allocation will just fail harmlessly so it's not a big deal. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jesper Dangaard Brouer authored
Document the BPF ld/ldx size defines as comments in the code, as it makes it faster to look them up in a programming/review setting than finding the sizes in Documentation/networking/filter.txt. Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
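The defines in question, with size annotations along these lines (values from include/uapi/linux/bpf_common.h; BPF_DW is eBPF-only in bpf.h):

```c
#define BPF_W	0x00	/* 32-bit */
#define BPF_H	0x08	/* 16-bit */
#define BPF_B	0x10	/*  8-bit */
/* eBPF only: */
#define BPF_DW	0x18	/* 64-bit */
```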
-
Yonghong Song authored
Currently, the bpf_trace_printk helper uses the fake ip address 0x1, with comments saying that the fake ip will not be printed. This is indeed true for 4.12 and earlier versions, but for 4.13 and later the ip address will be printed if it cannot be resolved with kallsyms. Run the samples/bpf/tracex5 program and you will see the following in the debugfs trace_pipe output:

    <...>-1819 [003] .... 443.497877: 0x00000001: mmap
    <...>-1819 [003] .... 443.498289: 0x00000001: syscall=102 (one of get/set uid/pid/gid)

The kernel commit that changed this behavior is commit feaf1283 ("tracing: Show address when function names are not found") by Steven Rostedt (VMware) <rostedt@goodmis.org>, dated Thu Jun 22 17:04:55 2017 -0400. This patch changes the comment and also alters the fake ip address to 0x0, as users may think 0x1 has some special meaning while it doesn't. The new output:

    <...>-1799 [002] .... 25.953576: 0: mmap
    <...>-1799 [002] .... 25.953865: 0: read(fd=0, buf=00000000053936b5, size=512)

Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jesper Dangaard Brouer authored
Improve the 'unknown reason' comment with an actual explanation of why the ctx pkt-data pointers need to be loaded again after the helper function bpf_xdp_adjust_meta(), based on the explanation Daniel gave. Fixes: 36e04a2d ("samples/bpf: xdp2skb_meta shows transferring info from XDP to SKB") Reported-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
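The pattern being documented, sketched for illustration (struct meta_info stands in for the sample's metadata struct): the helper can move packet data, so the verifier invalidates every previously derived packet pointer and demands a fresh load plus bounds check.

```c
if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(struct meta_info)) < 0)
	return XDP_ABORTED;

/* re-load the ctx pkt-data pointers after the helper call */
void *data      = (void *)(long)ctx->data;
void *data_meta = (void *)(long)ctx->data_meta;

struct meta_info *meta = data_meta;
if ((void *)(meta + 1) > data)	/* fresh bounds check for the verifier */
	return XDP_ABORTED;
```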
-
Daniel Borkmann authored
Jakub Kicinski says: ==================== Jiong says: Currently bpftool can disassemble a host JITed image, for example x86_64, using libbfd. However, it cannot disassemble an offload JITed image. There are two reasons: 1. bpf_obj_get_info_by_fd/struct bpf_prog_info cannot get the address of the JITed image and the image's length. 2. Even after issue 1 is resolved, bpftool cannot figure out the offload arch from bpf_prog_info, and therefore can't drive the libbfd disassembler correctly. This patch set resolves issue 1 by introducing two new fields, "jited_len" and "jited_image", in bpf_dev_offload. These two fields serve as the generic interface for all offload targets to communicate the JITed image address and length to higher level callers. For example, bpf_obj_get_info_by_fd can use them to fill the userspace-visible fields jited_prog_len and jited_prog_insns. This patch set resolves issue 2 by getting the bfd backend name through "ifindex", i.e. the network interface index. v1: - Deduce the bfd arch name through ifindex, i.e. the network interface index. First, map ifindex to devname through ifindex_to_name_ns, then get the pci id through /sys/class/dev/DEVNAME/device/vendor. (Daniel, Alexei) ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jiong Wang authored
The current architecture detection method in bpftool is designed for the host case. For the offload case, we can't use the architecture of "bpftool" itself. Instead, we can call the existing "ifindex_to_name_ns" to get DEVNAME, then read the pci id from /sys/class/dev/DEVNAME/device/vendor, and finally map the vendor id to the bfd arch name that will be used to select the bfd backend for the disassembler. Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
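The mapping step might look roughly like this (the helper name and the table contents are assumptions for illustration; Netronome's PCI vendor id is the one known offload target here):

```c
static const char *vendor_to_bfd_arch(unsigned int pci_vendor_id)
{
	switch (pci_vendor_id) {
	case 0x19ee:		/* Netronome */
		return "NFP-6xxx";
	default:
		return NULL;	/* unknown offload target */
	}
}
```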
-
Jiong Wang authored
This patch sets those new JIT info fields introduced in this patch set. Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jiong Wang authored
For the host JIT, there are "jited_len"/"bpf_func" fields in struct bpf_prog used by all host JIT targets to get the JITed image and its length, while for offload, targets are likely to have different offload mechanisms and keep this info in device-private data fields. Therefore, the BPF_OBJ_GET_INFO_BY_FD syscall needs a unified way to get JIT length and contents info for offload targets. One way is to introduce a new callback to parse device-private data and then fill those fields in bpf_prog_info; this might be a little heavy. The other way is to add generic fields that all offload targets initialize. This patch follows the second approach: introduce two new fields in struct bpf_dev_offload and teach bpf_prog_get_info_by_fd about them to fill correct jited_prog_len and jited_prog_insns in bpf_prog_info. Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
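The two generic fields, roughly as described (placement within the struct abridged; a sketch, not the verbatim include/linux/bpf.h hunk):

```c
struct bpf_dev_offload {
	/* ... existing members ... */
	void	*jited_image;	/* filled in by the offload target */
	u32	jited_len;
};
```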
-
- 17 Jan, 2018 5 commits
-
-
David S. Miller authored
Merge tag 'linux-can-next-for-4.16-20180116' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next Marc Kleine-Budde says: ==================== pull-request: can-next 2018-01-16 this is a pull request for net-next/master consisting of 9 patches. This is a series of patches, some of them initially by Franklin S Cooper Jr, which was picked up by Faiz Abbas. Faiz Abbas added some patches while working on this series; I contributed one as well. The first two patches add support to the CAN device infrastructure to limit the bitrate of a CAN adapter if the used CAN transceiver has a certain maximum bitrate. The remaining patches improve the m_can driver. They add support for bitrate limiting to the driver, clean up the driver and add support for runtime PM. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Luis de Bethencourt authored
The trailing semicolon is an empty statement that does no operation. It is completely stripped out by the compiler. Removing it since it doesn't do anything. Fixes: 5f35227e ("net: Generalize ndo_gso_check to ndo_features_check") Signed-off-by: Luis de Bethencourt <luisbg@kernel.org> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ganesh Goudar authored
Restructure the code that adds support for configuring PCIe VFs via the mgmt netdevice, which was added by commit 7829451c ("cxgb4: Add control net_device for configuring PCIe VF"). Original work by: Casey Leedom <leedom@chelsio.com> Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kirill Tkhai authored
idr_find() is safe under rcu_read_lock() and maybe_get_net() guarantees that net is alive. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kirill Tkhai authored
peernet2id_alloc() is racy without rtnl_lock(), as refcount_read(&peer->count) under net->nsid_lock does not guarantee that peer is alive:

    rcu_read_lock()
    peernet2id_alloc()                       ..
      spin_lock_bh(&net->nsid_lock)          ..
      refcount_read(&peer->count) (!= 0)     ..
      ..                                     put_net()
      ..                                       cleanup_net()
      ..                                         for_each_net(tmp)
      ..                                           spin_lock_bh(&tmp->nsid_lock)
      ..                                           __peernet2id(tmp, net) == -1
      ..                                         ..
      ..                                       ..
      __peernet2id_alloc(alloc == true)      ..
      ..                                     ..
    rcu_read_unlock()                        ..
                                             synchronize_rcu()
                                             kmem_cache_free(net)

After the above situation, net::netns_id contains an id pointing to freed memory, and any other dereference by that id will operate on this freed memory. Currently, peernet2id_alloc() is used under rtnl_lock() everywhere except ovs_vport_cmd_fill_info(), so this race can't occur. But peernet2id_alloc() is a generic interface, and it is better to fix it before someone really starts to use it in the wrong context. v2: Don't place refcount_read(&net->count) under net->nsid_lock, as suggested by Eric W. Biederman <ebiederm@xmission.com> v3: Rebase on top of net-next Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-