Commit a56497d3 authored by Quentin Monnet, committed by Daniel Borkmann

bpf: update bpf.h uapi header for tools

Bring fixes for eBPF helper documentation formatting to bpf.h under
tools/ as well.
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
parent 79552fbc
@@ -828,12 +828,12 @@ union bpf_attr {
  *
  *		Also, be aware that the newer helper
  *		**bpf_perf_event_read_value**\ () is recommended over
- *		**bpf_perf_event_read*\ () in general. The latter has some ABI
+ *		**bpf_perf_event_read**\ () in general. The latter has some ABI
  *		quirks where error and counter value are used as a return code
  *		(which is wrong to do since ranges may overlap). This issue is
- *		fixed with bpf_perf_event_read_value(), which at the same time
- *		provides more features over the **bpf_perf_event_read**\ ()
- *		interface. Please refer to the description of
+ *		fixed with **bpf_perf_event_read_value**\ (), which at the same
+ *		time provides more features over the **bpf_perf_event_read**\
+ *		() interface. Please refer to the description of
  *		**bpf_perf_event_read_value**\ () for details.
  *	Return
  *		The value of the perf event counter read from the map, or a
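For context on the hunk above, a minimal sketch of a BPF program using the recommended bpf_perf_event_read_value() helper rather than bpf_perf_event_read(). It assumes a libbpf-style build (bpf_helpers.h, SEC() annotations); the map name "counters", its max_entries value, and the kprobe attach point are hypothetical.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(int));
	__uint(max_entries, 64);	/* e.g. at least the number of CPUs */
} counters SEC(".maps");		/* hypothetical map, filled from user space */

SEC("kprobe/do_sys_openat2")		/* hypothetical attach point */
int read_counter(struct pt_regs *ctx)
{
	struct bpf_perf_event_value val = {};
	long err;

	/* Unlike bpf_perf_event_read(), the error code and the counter value
	 * are kept separate; enabled/running times allow scaling the counter. */
	err = bpf_perf_event_read_value(&counters, BPF_F_CURRENT_CPU,
					&val, sizeof(val));
	if (err)
		return 0;

	bpf_printk("counter=%llu enabled=%llu running=%llu",
		   val.counter, val.enabled, val.running);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";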
@@ -1770,33 +1770,33 @@ union bpf_attr {
  *
  * int bpf_get_stack(struct pt_regs *regs, void *buf, u32 size, u64 flags)
  *	Description
  *		Return a user or a kernel stack in bpf program provided buffer.
  *		To achieve this, the helper needs *ctx*, which is a pointer
  *		to the context on which the tracing program is executed.
  *		To store the stacktrace, the bpf program provides *buf* with
  *		a nonnegative *size*.
  *
  *		The last argument, *flags*, holds the number of stack frames to
  *		skip (from 0 to 255), masked with
  *		**BPF_F_SKIP_FIELD_MASK**. The next bits can be used to set
  *		the following flags:
  *
  *		**BPF_F_USER_STACK**
  *			Collect a user space stack instead of a kernel stack.
  *		**BPF_F_USER_BUILD_ID**
  *			Collect buildid+offset instead of ips for user stack,
  *			only valid if **BPF_F_USER_STACK** is also specified.
  *
  *		**bpf_get_stack**\ () can collect up to
  *		**PERF_MAX_STACK_DEPTH** both kernel and user frames, subject
  *		to sufficient large buffer size. Note that
  *		this limit can be controlled with the **sysctl** program, and
  *		that it should be manually increased in order to profile long
  *		user stacks (such as stacks for Java programs). To do so, use:
  *
  *		::
  *
  *			# sysctl kernel.perf_event_max_stack=<new value>
  *
  *	Return
  *		a non-negative value equal to or less than size on success, or
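Similarly, a minimal sketch of a tracing program calling bpf_get_stack() with a frame-skip count and BPF_F_USER_STACK, under the same libbpf-style assumptions; the attach point and the stack depth are illustrative (the buffer must fit in the 512-byte BPF stack).

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define STACK_DEPTH 32			/* illustrative; 32 * 8 bytes fits the 512-byte BPF stack */

SEC("kprobe/do_sys_openat2")		/* hypothetical attach point */
int dump_user_stack(struct pt_regs *ctx)
{
	__u64 ips[STACK_DEPTH];
	long len;

	/* Lower 8 bits of flags hold the number of frames to skip (here 2);
	 * BPF_F_USER_STACK selects the user-space stack. */
	len = bpf_get_stack(ctx, ips, sizeof(ips),
			    (2 & BPF_F_SKIP_FIELD_MASK) | BPF_F_USER_STACK);
	if (len < 0)
		return 0;		/* negative error, e.g. no user stack available */

	bpf_printk("got %ld bytes (%ld frames)", len, len / 8);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";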