- 14 Nov, 2020 22 commits
-
Jakub Kicinski authored
Shannon Nelson says:
====================
ionic updates

These updates are a bit of code cleaning and a minor bit of performance tweaking.

v3: convert ionic_lif_quiesce() to void
v2: added void cast on call to ionic_lif_quiesce()
    lowered batching threshold
    added patch to flatten calls to ionic_lif_rx_mode
    added patch to change from_ndo to can_sleep
====================

Link: https://lore.kernel.org/r/20201112182208.46770-1-snelson@pensando.io
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Shannon Nelson authored
With a few more uses of true and false in function calls, we need to give them some useful names so we can tell from the calling point what we're doing. Signed-off-by: Shannon Nelson <snelson@pensando.io> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Shannon Nelson authored
Instead of having two different ways of expressing the same sleepability concept, using opposite logic, we can rework the from_ndo flag into can_sleep for more consistent usage. Fixes: 1800eee1 ("net: ionic: Replace in_interrupt() usage.") Signed-off-by: Shannon Nelson <snelson@pensando.io> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
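A minimal sketch of the kind of conversion this describes; the structure and all names here (example_lif, example_rx_mode, the CAN_SLEEP defines) are illustrative stand-ins, not the actual ionic driver code:

```c
#include <linux/mutex.h>
#include <linux/workqueue.h>
#include <linux/types.h>

#define CAN_SLEEP	true
#define CAN_NOT_SLEEP	false

struct example_lif {
	struct mutex config_lock;
	struct work_struct deferred_work;
};

/* The flag now states the capability directly instead of where the
 * call came from (the old from_ndo, with inverted meaning).
 */
static void example_rx_mode(struct example_lif *lif, bool can_sleep)
{
	if (!can_sleep) {
		/* Atomic context (e.g. ndo_set_rx_mode): defer the work. */
		schedule_work(&lif->deferred_work);
		return;
	}

	/* Process context: safe to block on the config lock. */
	mutex_lock(&lif->config_lock);
	/* ... program filters ... */
	mutex_unlock(&lif->config_lock);
}
```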
-
Shannon Nelson authored
The _ionic_lif_rx_mode() is only used once and really doesn't need to be broken out. Signed-off-by: Shannon Nelson <snelson@pensando.io> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Shannon Nelson authored
We should be using the multicast sync routines for the multicast filters. Also, let's just flatten the logic a bit and pull the small unicast routine back into ionic_set_rx_mode(). Fixes: 1800eee1 ("net: ionic: Replace in_interrupt() usage.") Signed-off-by: Shannon Nelson <snelson@pensando.io> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Shannon Nelson authored
We don't need to refill the rx descriptors on every napi if only a few were handled. Waiting until we can batch up a few together will save us a few Rx cycles. Signed-off-by: Shannon Nelson <snelson@pensando.io> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
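A minimal sketch of the batching idea, with hypothetical names; the actual ionic threshold value and queue helpers may differ:

```c
#include <linux/types.h>

#define EXAMPLE_RX_FILL_THRESHOLD	16	/* assumed batch size */

struct example_queue;				/* opaque for this sketch */
unsigned int example_q_space_used(struct example_queue *q);
void example_rx_fill(struct example_queue *q);

static void example_rx_refill(struct example_queue *q, unsigned int nr_handled)
{
	/* Only a few descriptors were consumed this napi pass: defer the
	 * refill until enough accumulate to be worth the fill/doorbell cost.
	 */
	if (nr_handled < EXAMPLE_RX_FILL_THRESHOLD &&
	    example_q_space_used(q) < EXAMPLE_RX_FILL_THRESHOLD)
		return;

	example_rx_fill(q);	/* repost the accumulated batch of buffers */
}
```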
-
Shannon Nelson authored
After the queues are stopped, expressly quiesce the lif. This assures that even if the queues were in an odd state, the firmware will close up everything cleanly. Signed-off-by: Shannon Nelson <snelson@pensando.io> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Shannon Nelson authored
Request a link check as soon as the netdev is registered rather than waiting for the watchdog to go off in order to get the interface operational a little more quickly. Signed-off-by: Shannon Nelson <snelson@pensando.io> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Shannon Nelson authored
Change the order of operations in the link_up handling to be sure that the queues are up and ready before we announce that the link is up. Signed-off-by: Shannon Nelson <snelson@pensando.io> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
YueHaibing authored
Check PTR_ERR with IS_ERR to fix this. Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Link: https://lore.kernel.org/r/20201112144936.54776-1-yuehaibing@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
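The one-line log does not show the offending code, but the idiom it refers to is the standard kernel error-pointer check: test with IS_ERR() before extracting the errno with PTR_ERR(). The sketch below is generic (devm_clk_get used only as an example producer of error pointers), not the patched driver:

```c
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

static int example_get_clock(struct device *dev)
{
	struct clk *clk = devm_clk_get(dev, "core");

	if (IS_ERR(clk))		/* check first ...            */
		return PTR_ERR(clk);	/* ... then decode the errno  */

	/* A bare "if (!clk)" or an unconditional PTR_ERR() here would
	 * miss or misreport the error encoded in the pointer.
	 */
	return 0;
}
```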
-
Lukas Bulwahn authored
Commit bdb7cc64 ("ipv6: Count interface receive statistics on the ingress netdev") removed all callers of ipv6_skb_idev(). Hence, since then, ipv6_skb_idev() is unused, and make CC=clang W=1 warns:

net/ipv6/exthdrs.c:909:33: warning: unused function 'ipv6_skb_idev' [-Wunused-function]

So, remove this unused function and with it the -Wunused-function warning. Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com> Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Link: https://lore.kernel.org/r/20201113135012.32499-1-lukas.bulwahn@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:
====================
pull-request: bpf-next 2020-11-14

1) Add BTF generation for kernel modules and extend BTF infra in kernel e.g. support for split BTF loading and validation, from Andrii Nakryiko.

2) Support for pointers beyond pkt_end to recognize LLVM generated patterns on inlined branch conditions, from Alexei Starovoitov.

3) Implements bpf_local_storage for task_struct for BPF LSM, from KP Singh.

4) Enable FENTRY/FEXIT/RAW_TP tracing program to use the bpf_sk_storage infra, from Martin KaFai Lau.

5) Add XDP bulk APIs that introduce a defer/flush mechanism to optimize the XDP_REDIRECT path, from Lorenzo Bianconi.

6) Fix a potential (although rather theoretical) deadlock of hashtab in NMI context, from Song Liu.

7) Fixes for cross and out-of-tree build of bpftool and runqslower allowing build for different target archs on same source tree, from Jean-Philippe Brucker.

8) Fix error path in htab_map_alloc() triggered from syzbot, from Eric Dumazet.

9) Move functionality from test_tcpbpf_user into the test_progs framework so it can run in BPF CI, from Alexander Duyck.

10) Lift hashtab key_size limit to be larger than MAX_BPF_STACK, from Florian Lehner.

Note that for the fix from Song we have seen a sparse report on context imbalance which requires changes in sparse itself for proper annotation detection where this is currently being discussed on linux-sparse among developers [0]. Once we have more clarification/guidance after their fix, Song will follow-up.

[0] https://lore.kernel.org/linux-sparse/CAHk-=wh4bx8A8dHnX612MsDO13st6uzAz1mJ1PaHHVevJx_ZCw@mail.gmail.com/T/
    https://lore.kernel.org/linux-sparse/20201109221345.uklbp3lzgq6g42zb@ltop.local/T/

* git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (66 commits)
  net: mlx5: Add xdp tx return bulking support
  net: mvpp2: Add xdp tx return bulking support
  net: mvneta: Add xdp tx return bulking support
  net: page_pool: Add bulk support for ptr_ring
  net: xdp: Introduce bulking for xdp tx return path
  bpf: Expose bpf_d_path helper to sleepable LSM hooks
  bpf: Augment the set of sleepable LSM hooks
  bpf: selftest: Use bpf_sk_storage in FENTRY/FEXIT/RAW_TP
  bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
  bpf: Rename some functions in bpf_sk_storage
  bpf: Folding omem_charge() into sk_storage_charge()
  selftests/bpf: Add asm tests for pkt vs pkt_end comparison.
  selftests/bpf: Add skb_pkt_end test
  bpf: Support for pointers beyond pkt_end.
  tools/bpf: Always run the *-clean recipes
  tools/bpf: Add bootstrap/ to .gitignore
  bpf: Fix NULL dereference in bpf_task_storage
  tools/bpftool: Fix build slowdown
  tools/runqslower: Build bpftool using HOSTCC
  tools/runqslower: Enable out-of-tree build
  ...
====================

Link: https://lore.kernel.org/r/20201114020819.29584-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Steen Hegelund authored
Add VSC8572 and VSC8574 in the PTP configuration as they also support PTP. The relevant datasheets can be found here:
- VSC8572: https://www.microchip.com/wwwproducts/en/VSC8572
- VSC8574: https://www.microchip.com/wwwproducts/en/VSC8574

Signed-off-by: Steen Hegelund <steen.hegelund@microchip.com>
Link: https://lore.kernel.org/r/20201112092250.914079-1-steen.hegelund@microchip.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Daniel Borkmann authored
Lorenzo Bianconi says:
====================
XDP bulk APIs introduce a defer/flush mechanism to return pages belonging to the same xdp_mem_allocator object (identified via the mem.id field) in bulk to optimize I-cache and D-cache since xdp_return_frame is usually run inside the driver NAPI tx completion loop. Convert mvneta, mvpp2 and mlx5 drivers to xdp_return_frame_bulk APIs.

More details on benchmarks run on mlx5 can be found here:
https://github.com/xdp-project/xdp-project/blob/master/areas/mem/xdp_bulk_return01.org

Changes since v5:
- do not keep looping over ptr_ring if the cache is full but release leftover pages running page_pool_return_page

Changes since v4:
- fix comments
- introduce xdp_frame_bulk_init utility routine
- compiler annotations for I-cache code layout
- move rcu_read_lock outside fast-path
- mlx5 xdp bulking code optimization

Changes since v3:
- align DEV_MAP_BULK_SIZE to XDP_BULK_QUEUE_SIZE
- refactor page_pool_put_page_bulk to avoid code duplication

Changes since v2:
- move mvneta changes in a dedicated patch

Changes since v1:
- improve comments
- rework xdp_return_frame_bulk routine logic
- move count and xa fields at the beginning of xdp_frame_bulk struct
- invert logic in page_pool_put_page_bulk for loop
====================

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
-
Lorenzo Bianconi authored
Convert mlx5 driver to xdp_return_frame_bulk APIs.

XDP_REDIRECT (upstream codepath): 8.9Mpps
XDP_REDIRECT (upstream codepath + bulking APIs): 10.2Mpps

Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Jesper Dangaard Brouer <brouer@redhat.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/250460319fd868b7b5668fc1deca74dd42813a90.1605267335.git.lorenzo@kernel.org
-
Lorenzo Bianconi authored
Convert mvpp2 driver to xdp_return_frame_bulk APIs.

XDP_REDIRECT (upstream codepath): 1.79Mpps
XDP_REDIRECT (upstream codepath + bulking APIs): 1.93Mpps

Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Matteo Croce <mcroce@microsoft.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/0b38c295e58e8ce251ef6b4e2187a2f457f9f7a3.1605267335.git.lorenzo@kernel.org
-
Lorenzo Bianconi authored
Convert mvneta driver to xdp_return_frame_bulk APIs.

XDP_REDIRECT (upstream codepath): 275Kpps
XDP_REDIRECT (upstream codepath + bulking APIs): 284Kpps

Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/9af8014006d022fc0fec78cdaa71beb56999750d.1605267335.git.lorenzo@kernel.org
-
Lorenzo Bianconi authored
Introduce the capability to batch page_pool ptr_ring refill since it is usually run inside the driver NAPI tx completion loop. Suggested-by: Jesper Dangaard Brouer <brouer@redhat.com> Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Link: https://lore.kernel.org/bpf/08dd249c9522c001313f520796faa777c4089e1c.1605267335.git.lorenzo@kernel.org
-
Lorenzo Bianconi authored
XDP bulk APIs introduce a defer/flush mechanism to return pages belonging to the same xdp_mem_allocator object (identified via the mem.id field) in bulk to optimize I-cache and D-cache since xdp_return_frame is usually run inside the driver NAPI tx completion loop. The bulk queue size is set to 16 to be aligned to how XDP_REDIRECT bulking works. The bulk is flushed when it is full or when mem.id changes. xdp_frame_bulk is usually stored/allocated on the function call-stack to avoid locking penalties. Current implementation considers only page_pool memory model. Suggested-by: Jesper Dangaard Brouer <brouer@redhat.com> Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Link: https://lore.kernel.org/bpf/e190c03eac71b20c8407ae0fc2c399eda7835f49.1605267335.git.lorenzo@kernel.org
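A sketch of how a driver's NAPI TX-completion loop is expected to use the bulk helpers described above (xdp_frame_bulk_init, xdp_return_frame_bulk, xdp_flush_frame_bulk); the surrounding ring type and lookup helper are invented for illustration:

```c
#include <net/xdp.h>

struct example_tx_ring;		/* driver-specific, opaque here */
struct xdp_frame *example_next_completed_frame(struct example_tx_ring *ring);

static void example_xdp_tx_complete(struct example_tx_ring *ring, int budget)
{
	struct xdp_frame_bulk bq;

	xdp_frame_bulk_init(&bq);	/* empty bulk queue on the stack */

	rcu_read_lock();		/* covers the mem-allocator lookup */

	while (budget--) {
		struct xdp_frame *xdpf = example_next_completed_frame(ring);

		if (!xdpf)
			break;

		/* Queued internally; flushed automatically when the bulk
		 * fills up or when the frame's mem.id changes.
		 */
		xdp_return_frame_bulk(xdpf, &bq);
	}

	xdp_flush_frame_bulk(&bq);	/* release whatever is left over */

	rcu_read_unlock();
}
```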
-
Jisheng Zhang authored
Use devm_reset_control_get_optional() and devm_clk_get_optional() rather than open coding them. Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
Link: https://lore.kernel.org/r/20201112092606.5173aa6f@xhacker.debian
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
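A generic sketch of what the cleanup looks like (names are illustrative, not the patched driver): the _optional() getters return NULL when the resource simply is not described, so the open-coded "treat a missing clock/reset as OK" logic goes away, and NULL handles are safely accepted by the enable/deassert calls.

```c
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/reset.h>

static int example_probe_resources(struct device *dev)
{
	struct reset_control *rst;
	struct clk *clk;

	clk = devm_clk_get_optional(dev, NULL);
	if (IS_ERR(clk))
		return PTR_ERR(clk);	/* real error, not "missing" */

	rst = devm_reset_control_get_optional(dev, NULL);
	if (IS_ERR(rst))
		return PTR_ERR(rst);

	reset_control_deassert(rst);	/* no-op if rst is NULL */

	return clk_prepare_enable(clk);	/* no-op success if clk is NULL */
}
```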
-
Heiner Kallweit authored
We can simplify the for() condition and eliminate variable tx_left. The change also considers that tp->cur_tx may be incremented by a racing rtl8169_start_xmit(). In addition replace the write to tp->dirty_tx and the following smp_mb() with an equivalent call to smp_store_mb(). This implicitly adds a WRITE_ONCE() to the write. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Link: https://lore.kernel.org/r/c2e19e5e-3d3f-d663-af32-13c3374f5def@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
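A generic illustration of the transformation described above (not the actual r8169 code): smp_store_mb(v, val) is equivalent to WRITE_ONCE(v, val) followed by smp_mb().

```c
#include <linux/compiler.h>
#include <asm/barrier.h>

static void example_publish_index(unsigned int *index, unsigned int val)
{
	/* Before:
	 *	*index = val;
	 *	smp_mb();
	 * After: one call that does the store (with WRITE_ONCE semantics)
	 * and the full memory barrier.
	 */
	smp_store_mb(*index, val);
}
```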
-
Heiner Kallweit authored
tp->dirty_tx and tp->cur_tx may be changed by a racing rtl_tx() or rtl8169_start_xmit(). Use READ_ONCE() to annotate the races and ensure that the compiler doesn't use cached values. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Link: https://lore.kernel.org/r/5676fee3-f6b4-84f2-eba5-c64949a371ad@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
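A generic illustration of annotating such a lockless read that races with a writer on another CPU (again not the actual r8169 code):

```c
#include <linux/compiler.h>

static unsigned int example_tx_frames_in_flight(const unsigned int *cur_tx,
						const unsigned int *dirty_tx)
{
	/* READ_ONCE() documents the data race and stops the compiler from
	 * caching or re-reading the values across the calculation.
	 */
	return READ_ONCE(*cur_tx) - READ_ONCE(*dirty_tx);
}
```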
-
- 13 Nov, 2020 18 commits
-
Jakub Kicinski authored
Alex Elder says:
====================
net: ipa: two fixes

This small series makes two fixes to the IPA code:

- While reviewing something else I found that one of the resource limits on the SDM845 used the wrong value. The first patch fixes this. The correct value allocates more resources of this type for IPA to use, and otherwise does not change behavior.

- When the IPA-resident microcontroller starts up it generates an event, which triggers an AP interrupt. The event merely provides some information for logging, which we don't support. We already ignore the event, and that's harmless. So this patch explicitly ignores it rather than issuing a warning when it occurs.
====================

Link: https://lore.kernel.org/r/20201112121157.19784-1-elder@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
The IPA-resident microcontroller has the ability to log various activity in an area of IPA shared memory. When the microcontroller starts it generates an event to the AP to provide information about the log. We don't support reading this log, and we can safely ignore the event. So do that rather than treating the log info event we receive as "unsupported." Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
I have discovered that the maximum number of source packet contexts configured for SDM845 is incorrect. Fix this error. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Edward Cree says:
====================
sfc: further EF100 encap TSO features

This series adds support for GRE and GRE_CSUM TSO on EF100 NICs, as well as improving the handling of UDP tunnel TSO.
====================

Link: https://lore.kernel.org/r/eda2de73-edf2-8b92-edb9-099ebda09ebc@solarflare.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Edward Cree authored
We can treat SKB_GSO_GRE almost exactly the same as UDP tunnels, except that we don't want to edit the outer UDP len (as there isn't one). For SKB_GSO_GRE_CSUM, we have to use GSO_PARTIAL as the device doesn't support offload of non-UDP outer L4 checksums. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Acked-by: Martin Habets <mhabets@solarflare.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
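The sketch below only illustrates the distinction the text draws (it is not the sfc EF100 code): plain GRE can ride the UDP-tunnel TSO path minus the outer UDP length edit, while GRE with an outer checksum needs the GSO_PARTIAL fallback because the NIC cannot offload a non-UDP outer L4 checksum.

```c
#include <linux/skbuff.h>

static bool example_needs_gso_partial(const struct sk_buff *skb)
{
	unsigned int gso_type = skb_shinfo(skb)->gso_type;

	if (gso_type & SKB_GSO_GRE_CSUM)
		return true;	/* outer checksum: take the GSO_PARTIAL path */

	/* SKB_GSO_GRE and SKB_GSO_UDP_TUNNEL(_CSUM) are handled as full TSO. */
	return false;
}
```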
-
Edward Cree authored
By asking the HW for the correct edits, we can make UDP tunnel TSO work without needing GSO_PARTIAL. So don't specify it in our netdev->gso_partial_features. However, retain GSO_PARTIAL support, as this will be used for other protocols later. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Acked-by: Martin Habets <mhabets@solarflare.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
-
Edward Cree authored
Our TSO descriptors got even more fussy. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Acked-by: Martin Habets <mhabets@solarflare.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
-
Jakub Kicinski authored
Alex Elder says:
====================
net: ipa: GSI register consolidation

This series rearranges and consolidates some GSI register definitions. Its general aim is to make things more consistent, by:
- Using enumerated types to define the values held in GSI register fields
- Defining field values in "gsi_reg.h", together with the definition of the register (and field) that holds them
- Formatting enumerated type members consistently, with hexadecimal numeric values, and assignments aligned on the same column

There is one checkpatch "CHECK" warning requesting a blank line; I ignored that because my intention was to group certain definitions.
====================

Link: https://lore.kernel.org/r/20201110215922.23514-1-elder@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
Replace constants defined with an "_FVAL" suffix with values defined in enumerated types, to be consistent with other usage in the driver. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
The gsi_ch_cmd_opcode, gsi_evt_cmd_opcode, and gsi_generic_cmd_opcode enumerated types are values that fields in the GSI command registers can take on. Move their definitions out of "gsi.c" and into "gsi_reg.h", alongside the definition of registers they are associated with. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
The gsi_err_code and gsi_err_type enumerated types are values that fields in the GSI ERROR_LOG register can take on. Move their definitions out of "gsi.c" and into "gsi_reg.h", alongside the definition of the ERROR_LOG register offset and field symbols. Drop the "_ERR" suffix in the names of the gsi_err_code members. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
The gsi_channel_type enumerated type defines values used for the channel type/protocol for event rings and channels. Move its definition out of "gsi.c" and into "gsi_reg.h", alongside the definition of the CH_C_CNTXT_0 register offset and its fields. Add a comment near the definition of the EV_CH_E_CNTXT_0 register indicating this type is used for its EV_CHTYPE field. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
The numeric values that represent the event ring channel type are identical to the values that represent the matching protocol used for a channel. Use a new gsi_channel_type enumerated type to represent the values programmed for both cases, using "CHANNEL_TYPE" in member names in place of "EVT_CHTYPE" and "CHANNEL_PROTOCOL". Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
Define the GSI global interrupt types with an enumerated type whose values are the bit positions representing the global interrupt types. Similarly, define the GSI general interrupt types with an enumerated type whose values are the bit positions of general interrupt types. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
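An illustrative-only sketch of the pattern (the member names and values are placeholders, not the actual gsi_reg.h contents): enum members hold bit positions, and BIT() turns them into mask bits when the register value is built.

```c
#include <linux/bits.h>
#include <linux/types.h>

enum example_gsi_global_irq {
	EXAMPLE_ERROR_INT	= 0x0,	/* bit position, not a mask */
	EXAMPLE_GP_INT1		= 0x1,
	EXAMPLE_GP_INT2		= 0x2,
	EXAMPLE_GP_INT3		= 0x3,
};

static u32 example_global_irq_mask(void)
{
	/* Enable the error and GP1 interrupts by setting their bits. */
	return BIT(EXAMPLE_ERROR_INT) | BIT(EXAMPLE_GP_INT1);
}
```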
-
Wenlin Kang authored
Replace strncpy() with strscpy(), which fixes the following warning:

In function 'bearer_name_validate', inlined from 'tipc_enable_bearer' at net/tipc/bearer.c:246:7:
net/tipc/bearer.c:141:2: warning: 'strncpy' specified bound 32 equals destination size [-Wstringop-truncation]
  strncpy(name_copy, name, TIPC_MAX_BEARER_NAME);
  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Signed-off-by: Wenlin Kang <wenlin.kang@windriver.com> Acked-by: Ying Xue <ying.xue@windriver.com>
Link: https://lore.kernel.org/r/20201112093442.8132-1-wenlin.kang@windriver.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
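A sketch of the general replacement pattern (names are illustrative, not the tipc code): unlike strncpy(), strscpy() always NUL-terminates and reports truncation via -E2BIG instead of silently leaving an unterminated buffer.

```c
#include <linux/string.h>
#include <linux/types.h>

#define EXAMPLE_NAME_LEN	32

static int example_copy_name(char *dst, const char *src)
{
	ssize_t len = strscpy(dst, src, EXAMPLE_NAME_LEN);

	if (len < 0)		/* -E2BIG: src did not fit */
		return len;

	return 0;		/* dst is guaranteed NUL-terminated */
}
```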
-
Jakub Kicinski authored
Merge tag 'mac80211-next-for-net-next-2020-11-13' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next

Johannes Berg says:
====================
Some updates:
* injection/radiotap updates for new test capabilities
* remove WDS support - even years ago when we turned it off by default it was already basically unusable
* support for HE (802.11ax) rates for beacons
* support for some vendor-specific HE rates
* many other small features/cleanups

* tag 'mac80211-next-for-net-next-2020-11-13' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next: (21 commits)
  nl80211: fix kernel-doc warning in the new SAE attribute
  cfg80211: remove WDS code
  mac80211: remove WDS-related code
  rt2x00: remove WDS code
  b43legacy: remove WDS code
  b43: remove WDS code
  carl9170: remove WDS code
  ath9k: remove WDS code
  wireless: remove CONFIG_WIRELESS_WDS
  mac80211: assure that certain drivers adhere to DONT_REORDER flag
  mac80211: don't overwrite QoS TID of injected frames
  mac80211: adhere to Tx control flag that prevents frame reordering
  mac80211: add radiotap flag to assure frames are not reordered
  mac80211: save HE oper info in BSS config for mesh
  cfg80211: add support to configure HE MCS for beacon rate
  nl80211: fix beacon tx rate mask validation
  nl80211/cfg80211: fix potential infinite loop
  cfg80211: Add support to calculate and report 4096-QAM HE rates
  cfg80211: Add support to configure SAE PWE value to drivers
  ieee80211: Add definition for WFA DPP
  ...
====================

Link: https://lore.kernel.org/r/20201113101148.25268-1-johannes@sipsolutions.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
KP Singh authored
Sleepable hooks are never called from an NMI/interrupt context, so it is safe to use the bpf_d_path helper in LSM programs attaching to these hooks. The helper is not restricted to sleepable programs and merely uses the list of sleepable hooks as the initial subset of LSM hooks where it can be used. Signed-off-by: KP Singh <kpsingh@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20201113005930.541956-3-kpsingh@chromium.org
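A sketch of a sleepable LSM program using bpf_d_path(), modeled loosely on the BPF selftests; the hook choice, program name, and buffer size are illustrative assumptions, not part of the patch.

```c
// SPDX-License-Identifier: GPL-2.0
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

SEC("lsm.s/file_open")			/* ".s" marks the program sleepable */
int BPF_PROG(log_file_open, struct file *file)
{
	char path[64];

	/* Safe here because sleepable LSM hooks never run in NMI/IRQ context. */
	bpf_d_path(&file->f_path, path, sizeof(path));

	bpf_printk("open: %s", path);
	return 0;			/* 0 == allow */
}
```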
-
KP Singh authored
Update the set of sleepable hooks with the ones that do not trigger a warning with might_fault() when exercised with the correct kernel config options enabled, i.e. DEBUG_ATOMIC_SLEEP=y, LOCKDEP=y, PROVE_LOCKING=y. This means that a sleepable LSM eBPF program can be attached to these LSM hooks. A new helper method bpf_lsm_is_sleepable_hook is added and the set is maintained locally in bpf_lsm.c. Signed-off-by: KP Singh <kpsingh@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20201113005930.541956-2-kpsingh@chromium.org
-