- 24 Oct, 2017 40 commits
-
Gustavo A. R. Silva authored
In preparation for enabling -Wimplicit-fallthrough, mark switch cases where we are expecting to fall through. Notice that in this particular case I placed the "fall through" comment on its own line, which is what GCC is expecting to find. Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com> Signed-off-by: David S. Miller <davem@davemloft.net>
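For reference, a minimal sketch (not taken from the patched driver; the switch values and flag bits are made up) of the comment placement that GCC's -Wimplicit-fallthrough heuristics recognize, i.e. a "fall through" comment on its own line right before the next case label:

    /* Hypothetical example: the comment sits on its own line so GCC's
     * -Wimplicit-fallthrough parser picks it up and does not warn.
     */
    static unsigned int speed_to_flags(int speed)
    {
        unsigned int flags = 0;

        switch (speed) {
        case 1000:
            flags |= 0x2;   /* hypothetical "gigabit" flag */
            /* fall through */
        case 100:
            flags |= 0x1;   /* hypothetical "fast" flag */
            break;
        default:
            break;
        }
        return flags;
    }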
-
Gustavo A. R. Silva authored
In preparation for enabling -Wimplicit-fallthrough, mark switch cases where we are expecting to fall through. Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Eric Dumazet says: ==================== ipv6: addrconf: hash improvements and cleanups Remove unnecessary BH blocking, and bring IPv6 addrconf into the modern world, with per-netns hash perturbation and a decent hash size. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
rcu_read_lock() is enough here, no need to block BH. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
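Schematically, the change swaps the BH-disabling RCU primitives for the plain ones; the lookup helper below is a placeholder, not the actual addrconf code:

    static bool addr_is_present(struct net *net, const struct in6_addr *addr)
    {
        bool found;

        rcu_read_lock();                  /* was rcu_read_lock_bh() */
        /* hypothetical RCU-protected hash lookup standing in for the real one */
        found = lookup_addr_rcu(net, addr) != NULL;
        rcu_read_unlock();                /* was rcu_read_unlock_bh() */

        return found;
    }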
-
Eric Dumazet authored
Table is really RCU protected, no need to block BH Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
rcu_read_lock() is enough here, no need to block BH. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
rcu_read_lock() is enough here, as inet6_ifa_finish_destroy() uses kfree_rcu() Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
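A sketch of why plain rcu_read_lock() suffices here: since the object is freed with kfree_rcu(), it cannot be reclaimed while any RCU read-side critical section that may still see it is running. The structure and function below are illustrative, not the real inet6_ifaddr layout:

    struct my_ifaddr {
        struct hlist_node addr_lst;
        struct rcu_head rcu;
    };

    static void my_ifaddr_destroy(struct my_ifaddr *ifa)
    {
        hlist_del_init_rcu(&ifa->addr_lst);
        /* the actual kfree() is deferred past a grace period, so
         * rcu_read_lock() readers never see freed memory
         */
        kfree_rcu(ifa, rcu);
    }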
-
Eric Dumazet authored
Bring IPv6 on par with IPv4: - Use net_hash_mix() to spread addresses a bit more. - Use a 256-slot hash table instead of 16 slots. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
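A sketch of the resulting hashing, modeled on the IPv4 counterpart (the exact IPv6 helper may differ in detail):

    #define IN6_ADDR_HSIZE_SHIFT 8                    /* 256 hash slots */
    #define IN6_ADDR_HSIZE       (1 << IN6_ADDR_HSIZE_SHIFT)

    static u32 inet6_addr_hash(const struct net *net, const struct in6_addr *addr)
    {
        /* per-netns perturbation via net_hash_mix() */
        u32 val = ipv6_addr_hash(addr) ^ net_hash_mix(net);

        return hash_32(val, IN6_ADDR_HSIZE_SHIFT);
    }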
-
Eric Dumazet authored
ipv6_add_addr_hash() can compute the hash value outside of locked section and pass it to ipv6_chk_same_addr(). Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
ipv6_chk_same_addr() is only used by ipv6_add_addr_hash(), so moving it avoids a forward declaration. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Jakub Kicinski says: ==================== nfp: bpf: stack support in offload This series brings stack support to offload. We use the LMEM (Local Memory) register file as memory to store the stack. Since this is a register file, we need to do appropriate shifts on unaligned accesses. The verifier's state tracking helps us with that. LMEM can't be accessed directly, so we add support for setting pointer registers through which one can read/write LMEM. This set does not support accessing the stack when the alignment is not known. This can be added later (most likely using the byte_align instructions). There are also a number of optimizations which have been left out:
- in more complex non-aligned accesses, double shift and rotation can save us a cycle. This, however, leads to code explosion since all access sizes have to be coded separately;
- since setting LM pointers costs around 5 cycles, we should be tracking their values to make sure we don't move them when they're already set correctly for an earlier access;
- in case of an 8-byte access aligned to 4 bytes and crossing a 32-byte boundary but not a 64-byte boundary, we don't have to increment the pointer, but this seems like too rare a case to justify the added complexity.
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Loading 64-bit constants requires up to 4 load immediates, since we can only load 16 bits at a time. If the 32-bit halves of the 64-bit constant are the same, however, we can save a cycle by doing a register move instead of two loads of 16 bits. Note that we don't optimize the normal ALU64 load because even though it's a 64-bit load, the upper half of the register comes from sign extension so we can load it in one cycle anyway. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
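The idea, sketched in plain C (the real JIT emits NFP instructions through its own emit_* helpers; load_imm32() and reg_move() below are hypothetical stand-ins):

    static void load_imm64(unsigned int dst_lo, unsigned int dst_hi, u64 imm64)
    {
        u32 lo = (u32)imm64;
        u32 hi = (u32)(imm64 >> 32);

        load_imm32(dst_lo, lo);          /* up to two 16-bit immediate loads */
        if (hi == lo)
            reg_move(dst_hi, dst_lo);    /* one cycle instead of two more loads */
        else
            load_imm32(dst_hi, hi);
    }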
-
Jakub Kicinski authored
If the stack pointer has a different value on different paths but the alignment to words (4B) remains the same, we can set a new LMEM access pointer to the calculated value and access whichever word it's pointing to. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
To access beyond the 64th byte of the stack we need to set a new stack pointer register (LMEM is accessed indirectly through those pointers). Add a function for encoding the local CSR access instruction. Use stack pointer number 3. Note that stack pointer registers allow us to index into 32 bytes of LMEM (with shift operations, i.e. when operands are restricted). This means that if an access crosses a 32-byte boundary we must not use offsetting; we have to set the pointer to the exact address and move it with post-increments. We depend on the datapath placing the stack base address in GPR A22 for our use. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
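A sketch of the boundary check behind that rule (illustrative helper, not the driver's actual code): an access can use pointer offsetting only if it stays within one 32-byte LMEM window; otherwise the pointer must be set exactly and advanced with post-increments.

    static bool lm_access_crosses_32b(unsigned int off, unsigned int size)
    {
        /* true if [off, off + size) spans two 32-byte LMEM windows */
        return (off / 32) != ((off + size - 1) / 32);
    }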
-
Jakub Kicinski authored
As long as the verifier tells us the stack offset exactly we can render the LMEM reads quite easily. Simply make sure that the offset is constant for a given instruction and add it to the instruction's offset. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
When we are performing unaligned stack accesses in the 32-64B window we have to do a read-modify-write cycle. E.g. for reading 8 bytes from address 17:
  0: tmp = stack[16]
  1: gprLo = tmp >> 8
  2: tmp = stack[20]
  3: gprLo |= tmp << 24
  4: tmp = stack[20]
  5: gprHi = tmp >> 8
  6: tmp = stack[24]
  7: gprHi |= tmp << 24
The load on line 4 is unnecessary, because tmp already contains data from stack[20]. For writes we can optimize both loads and writebacks away. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Add simple stack read support, similar to write in every aspect, but with data flowing the other way. Note that unlike writes, which can be done in smaller-than-word quantities, if registers are loaded with less than a word of stack contents, the values have to be zero extended. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
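The read path, illustrated in plain C under the assumption that the access does not cross a 4-byte word (the JIT emits the equivalent NFP load/shift/mask sequence):

    static u32 stack_read(const u32 *lmem, unsigned int off, unsigned int size)
    {
        u32 word = lmem[off / 4];                 /* only aligned 4B accesses */
        u32 shift = (off % 4) * 8;
        u32 mask = size == 4 ? ~0U : (1U << (size * 8)) - 1;

        return (word >> shift) & mask;            /* sub-word result zero extended */
    }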
-
Jakub Kicinski authored
Stack is implemented by the LMEM register file. Unaligned accesses to LMEM are not allowed. Accesses also have to be 4B wide. To support the stack we need to make sure offsets of pointers are known at translation time (for now) and perform correct load/mask/shift operations. Since we can access the first 64B of LMEM without much effort, support only stacks no bigger than 64B. Following commits will extend the possible sizes beyond that. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
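A plain-C illustration of the load/mask/shift idea for a sub-word store, assuming the access stays within one 4-byte word (the JIT emits the corresponding NFP instructions):

    static void stack_write(u32 *lmem, unsigned int off, unsigned int size, u32 val)
    {
        u32 shift = (off % 4) * 8;
        u32 mask = (size == 4 ? ~0U : (1U << (size * 8)) - 1) << shift;

        /* read-modify-write of the aligned word backing the access */
        lmem[off / 4] = (lmem[off / 4] & ~mask) | ((val << shift) & mask);
    }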
-
Jakub Kicinski authored
nfp_bpf_check_ptr() mostly looks at the pointer register. Add a temporary variable to shorten the code. While at it make sure we print error messages if translation fails to help users identify the problem (to be carried in ext_ack in due course). Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
The need to emit a few nops will become more common soon as we add stack and map support. Add a helper. This allows the code to be shorter, but may also be handy for marking the nops with a "reason" to ease applying optimizations. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
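A sketch of such a helper; emit_nop() and the nfp_prog type stand in for the driver's real primitives:

    static void wrp_nops(struct nfp_prog *nfp_prog, unsigned int count)
    {
        /* emit `count` nops; a "reason" argument could later tag them
         * for optimization passes
         */
        while (count--)
            emit_nop(nfp_prog);
    }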
-
David S. Miller authored
Jakub Kicinski says: ==================== tools: bpftool: Add JSON output to bpftool Quentin says: This series introduces support for JSON output to all bpftool commands. It adds option parsing, and several options are created:
* -j, --json     Switch to JSON output.
* -p, --pretty   Switch to JSON and print it in a human-friendly fashion.
* -h, --help     Print generic help message.
* -V, --version  Print version number.
This code uses a "json_writer", which is a copy of the one written by Stephen Hemminger in iproute2. --- I don't know if there is an easy way to share the code for json_writer without copying the file, so I am very open to suggestions on this matter. ==================== Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Quentin Monnet authored
Update the documentation to provide help about JSON output generation, and add an example in bpftool-prog manual page. Also reintroduce an example that was left aside when the tool was moved from GitHub to the kernel sources, in order to show how to mount the bpffs file system (to pin programs) inside the bpftool-prog manual page. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Quentin Monnet authored
Make the look-and-feel of the manual pages somewhat closer to other manual pages, such as the ones from the utilities from iproute2, by highlighting more keywords. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Quentin Monnet authored
As all commands can now return JSON output (possibly just a "null" value), output of `bpftool --json batch file FILE` should also be fully JSON compliant. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Quentin Monnet authored
Turn the err() and info() macros into functions. In order to avoid naming conflicts with variables in the code, rename them as p_err() and p_info() respectively. The behavior of these functions is similar to that of the macros for plain output. However, when JSON output is requested, they produce a JSON-formatted "error" object instead of printing a message to stderr. To handle error messages correctly with JSON, a modification was brought to their behavior nonetheless: the functions now append an end-of-line character at the end of the message. This way, we can remove end-of-line characters at the end of the argument strings, and not have them in the JSON output. All error messages are formatted to fit in a single call to p_err(), in order to produce a single JSON field. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
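A hedged sketch of the described behavior (the real bpftool implementation may differ in buffer handling and globals; json_output and json_wtr are assumed to be the tool's output flag and writer handle):

    #include <stdarg.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include "json_writer.h"

    extern bool json_output;
    extern json_writer_t *json_wtr;

    static void p_err(const char *fmt, ...)
    {
        char msg[4096];
        va_list ap;

        va_start(ap, fmt);
        vsnprintf(msg, sizeof(msg), fmt, ap);
        va_end(ap);

        if (json_output) {
            /* single "error" field, one object per error message */
            jsonw_start_object(json_wtr);
            jsonw_string_field(json_wtr, "error", msg);
            jsonw_end_object(json_wtr);
        } else {
            /* the end-of-line is now appended here, not by the callers */
            fprintf(stderr, "Error: %s\n", msg);
        }
    }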
-
Quentin Monnet authored
`bpftool batch file FILE` takes FILE as an argument and executes all the bpftool commands it finds inside (or stops if an error occurs). To obtain a consistent JSON output, create a root JSON array, then for each command create a new object containing two fields: one with the command arguments, the other with the output (which is the JSON object that the command would have produced, if called on its own). Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Quentin Monnet authored
Reuse the json_writer API introduced in an earlier commit to make bpftool able to generate JSON output on `bpftool map { show | dump | lookup | getnext }` commands. Remaining commands produce no output. Some functions have been split into plain-output and JSON versions in order to remain readable. Outputs for sample maps have been successfully tested against a JSON validator. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Quentin Monnet authored
Add a new printing function to dump translated eBPF instructions as JSON. As for plain output, opcodes are printed only on request (when `opcodes` is provided on the command line). The disassembled output is generated by the same code that is used by the kernel verifier. Example output:
$ bpftool --json --pretty prog dump xlated id 1
[{
        "disasm": "(bf) r6 = r1"
    },{
        "disasm": "(61) r7 = *(u32 *)(r6 +16)"
    },{
        "disasm": "(95) exit"
    }
]
$ bpftool --json --pretty prog dump xlated id 1 opcodes
[{
        "disasm": "(bf) r6 = r1",
        "opcodes": {
            "code": "0xbf",
            "src_reg": "0x1",
            "dst_reg": "0x6",
            "off": ["0x00","0x00"
            ],
            "imm": ["0x00","0x00","0x00","0x00"
            ]
        }
    },{
        "disasm": "(61) r7 = *(u32 *)(r6 +16)",
        "opcodes": {
            "code": "0x61",
            "src_reg": "0x6",
            "dst_reg": "0x7",
            "off": ["0x10","0x00"
            ],
            "imm": ["0x00","0x00","0x00","0x00"
            ]
        }
    },{
        "disasm": "(95) exit",
        "opcodes": {
            "code": "0x95",
            "src_reg": "0x0",
            "dst_reg": "0x0",
            "off": ["0x00","0x00"
            ],
            "imm": ["0x00","0x00","0x00","0x00"
            ]
        }
    }
]
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Quentin Monnet authored
Reuse the json_writer API introduced in an earlier commit to make bpftool able to generate JSON output on `bpftool prog show *` commands. A new printing function is created to be passed as an argument to the disassembler. Similarly to plain output, opcodes are printed on request. Outputs from sample programs have been successfully tested against a JSON validator. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Quentin Monnet authored
Reuse the json_writer API introduced in an earlier commit to make bpftool able to generate JSON output on `bpftool prog show *` commands. For readability, the code from show_prog() has been split into two functions, one for plain output, one for JSON. Outputs from sample programs have been successfully tested against a JSON validator. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Quentin Monnet authored
These two options can be used to ask for JSON output (-j or --json), and to make this JSON human-readable (-p or --pretty). A json_writer object is created when JSON is required, and will be used in follow-up commits to produce JSON output. Note that --pretty implies --json. The update for the manual pages and interactive help messages comes in a later patch of the series. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
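Roughly how the writer is set up once the options are parsed; jsonw_new() and jsonw_pretty() come from the json_writer API copied from iproute2, while the flag variables below are assumptions mirroring the options described above:

    #include <stdbool.h>
    #include <stdio.h>
    #include "json_writer.h"

    bool json_output;          /* set by -j / --json */
    bool pretty_output;        /* set by -p / --pretty (implies --json) */
    json_writer_t *json_wtr;

    static int init_json_writer(void)
    {
        if (!json_output)
            return 0;

        json_wtr = jsonw_new(stdout);
        if (!json_wtr)
            return -1;
        jsonw_pretty(json_wtr, pretty_output);
        return 0;
    }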
-
Quentin Monnet authored
Add an option parsing facility to bpftool, in anticipation of future options for requesting JSON output. Currently, two options are added: --help and --version, which act the same as the respective commands `help` and `version`. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
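A minimal sketch of the kind of option table involved (bpftool's actual table and handling may differ; do_help() and do_version() are assumed to be the existing command handlers):

    #include <getopt.h>

    static int handle_options(int argc, char **argv)
    {
        static const struct option options[] = {
            { "help",    no_argument, NULL, 'h' },
            { "version", no_argument, NULL, 'V' },
            { 0 }
        };
        int opt;

        while ((opt = getopt_long(argc, argv, "hV", options, NULL)) >= 0) {
            switch (opt) {
            case 'h':
                return do_help(argc - optind, argv + optind);
            case 'V':
                return do_version(argc - optind, argv + optind);
            default:
                return -1;    /* unknown option */
            }
        }
        return 0;
    }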
-
Quentin Monnet authored
In anticipation of the following commits, which add JSON output to the tool, two files are copied from the iproute2 repository (taken at commit 268a9eee985f): lib/json_writer.c and include/json_writer.h. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Song Liu says: ==================== net: add a set of tracepoints to tcp stack Changes from v1: Fix build error (with ipv6 as ko) by adding EXPORT_TRACEPOINT_SYMBOL_GPL for trace_tcp_send_reset. These patches add the following tracepoints to the tcp stack:
tcp_send_reset
tcp_receive_reset
tcp_destroy_sock
tcp_set_state
These tracepoints can be used to track TCP state changes. Such state changes include, but are not limited to: connection establishment, connection termination, tx and rx of RST, and various retransmits. Currently, we use the following kprobes to trace these events:
int kprobe__tcp_validate_incoming
int kprobe__tcp_send_active_reset
int kprobe__tcp_v4_send_reset
int kprobe__tcp_v6_send_reset
int kprobe__tcp_v4_destroy_sock
int kprobe__tcp_set_state
int kprobe__tcp_retransmit_skb
These tracepoints will help us simplify this work. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Song Liu authored
This patch adds tracepoint trace_tcp_set_state. Besides the usual fields (s/d ports, IP addresses), the old and new states of the socket are also printed with TP_printk, using __print_symbolic(). Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: David S. Miller <davem@davemloft.net>
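A condensed fragment (not the full tracepoint definition, and with only a few of the TCP states listed) showing how __print_symbolic() renders the numeric states inside TP_printk:

    TP_printk("sport=%hu dport=%hu oldstate=%s newstate=%s",
              __entry->sport, __entry->dport,
              __print_symbolic(__entry->oldstate,
                               { TCP_ESTABLISHED, "TCP_ESTABLISHED" },
                               { TCP_SYN_SENT,    "TCP_SYN_SENT" },
                               { TCP_CLOSE,       "TCP_CLOSE" }),
              __print_symbolic(__entry->newstate,
                               { TCP_ESTABLISHED, "TCP_ESTABLISHED" },
                               { TCP_SYN_SENT,    "TCP_SYN_SENT" },
                               { TCP_CLOSE,       "TCP_CLOSE" }))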
-
Song Liu authored
This patch adds trace event trace_tcp_destroy_sock. Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Song Liu authored
New tracepoint trace_tcp_receive_reset is added and called from tcp_reset(). This tracepoint is defined with a new class, tcp_event_sk. Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Song Liu authored
New tracepoint trace_tcp_send_reset is added and called from tcp_v4_send_reset(), tcp_v6_send_reset() and tcp_send_active_reset(). Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Song Liu authored
Some functions that we plan to add trace points to require a const sk and/or skb. So we mark these fields as const in the tracepoint. Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Song Liu authored
Introduce event class tcp_event_sk_skb for tcp tracepoints that have arguments sk and skb. Existing tracepoint trace_tcp_retransmit_skb() falls into this class. This patch rewrites the definition of trace_tcp_retransmit_skb() with tcp_event_sk_skb. Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: David S. Miller <davem@davemloft.net>
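A schematic outline of the class-plus-DEFINE_EVENT pattern (fields reduced to the two pointers; the real class also records ports and addresses):

    DECLARE_EVENT_CLASS(tcp_event_sk_skb,

        TP_PROTO(const struct sock *sk, const struct sk_buff *skb),

        TP_ARGS(sk, skb),

        TP_STRUCT__entry(
            __field(const void *, skbaddr)
            __field(const void *, skaddr)
        ),

        TP_fast_assign(
            __entry->skbaddr = skb;
            __entry->skaddr = sk;
        ),

        TP_printk("skbaddr=%p skaddr=%p", __entry->skbaddr, __entry->skaddr)
    );

    /* reusing the class avoids duplicating the field/assign/print blocks */
    DEFINE_EVENT(tcp_event_sk_skb, tcp_retransmit_skb,
        TP_PROTO(const struct sock *sk, const struct sk_buff *skb),
        TP_ARGS(sk, skb)
    );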
-