- 08 Dec, 2017 7 commits
-
-
John Fastabend authored
Per-cpu qstats support was added along with per-cpu bstats support, which is currently used by the ingress qdisc. This patch adds the set of helpers needed so that other qdiscs can use per-cpu qstats as well. Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
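For context, the added helpers follow the usual per-cpu counter pattern; a minimal sketch, modeled on the existing per-cpu bstats code (treat the exact names and fields as illustrative):

  static inline void qdisc_qstats_cpu_qlen_inc(struct Qdisc *sch)
  {
          /* per-cpu bump: no qdisc lock needed */
          this_cpu_inc(sch->cpu_qstats->qlen);
  }

  static inline void qdisc_qstats_cpu_backlog_inc(struct Qdisc *sch,
                                                  const struct sk_buff *skb)
  {
          this_cpu_add(sch->cpu_qstats->backlog, qdisc_pkt_len(skb));
  }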
-
John Fastabend authored
sch_direct_xmit() uses qdisc_qlen() as its return value, but all call sites of the routine only check whether it is zero. Simplify the logic so that we don't need to return an actual queue length value. This introduces one case where sch_direct_xmit() would previously have returned a qlen of zero but now returns true. However, in this case all call sites of sch_direct_xmit() will perform a dequeue(), get a NULL skb, and abort. This trades tracking qlen in the hot path for an extra dequeue operation. Overall this seems to be good for performance. Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
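The calling loop then looks roughly like the following (a simplified sketch of the fragment in __qdisc_run(), not the verbatim kernel code); note how a NULL dequeue inside qdisc_restart(), rather than a tracked qlen, is what terminates the loop:

  while (qdisc_restart(q, &packets)) {    /* wraps sch_direct_xmit() */
          quota -= packets;
          if (quota <= 0 || need_resched()) {
                  __netif_schedule(q);
                  break;
          }
  }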
-
John Fastabend authored
This patch adds a flag for queueing disciplines to indicate that the stack does not need to use the qdisc lock to protect operations. This can be used to build lockless scheduling algorithms and improve performance. The flag is checked in the tx path and the qdisc lock is only taken if it is not set. For now use a conditional if statement; later we could be more aggressive if it proves worthwhile and use a static key or wrap this in a likely(). The lockless case also drops the TCQ_F_CAN_BYPASS logic: synchronizing a qlen counter across threads proved to cost more than simply doing the enqueue/dequeue operations when tested with pktgen. Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
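The tx-path check described above amounts to something like this (a simplified sketch of the conditional in __dev_xmit_skb(); error handling elided):

  if (q->flags & TCQ_F_NOLOCK) {
          rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK;
          qdisc_run(q);
  } else {
          spin_lock(qdisc_lock(q));
          /* ... existing locked enqueue/bypass path ... */
          spin_unlock(qdisc_lock(q));
  }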
-
John Fastabend authored
Currently __qdisc_run() calls qdisc_run_end() but does not call qdisc_run_begin(). This makes it hard to track pairs of qdisc_run_{begin,end} across function calls. To simplify reading these code paths, this patch moves both calls into qdisc_run(). Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
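After the change, the begin/end pairing lives in one place; a sketch of the resulting wrapper:

  static inline void qdisc_run(struct Qdisc *q)
  {
          if (qdisc_run_begin(q)) {
                  __qdisc_run(q);
                  qdisc_run_end(q);
          }
  }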
-
Toshiaki Makita authored
Since commit 39e6c820 ("net: solve a NAPI race"), napi can be rescheduled within napi_complete_done() even in the non-busypoll case, but virtnet_poll() always enabled interrupts before completing, and when napi was rescheduled within napi_complete_done() it did not disable them again. This caused more interrupts when the event idx feature is disabled. According to commit cbdadbbf ("virtio_net: fix race in RX VQ processing") we cannot place virtqueue_enable_cb_prepare() after NAPI_STATE_SCHED is cleared, so disable interrupts again if napi_complete_done() returned false. Tested with vhost-user of OVS 2.7 on the host, which does not have the event idx feature.

* Before patch:

$ netperf -t UDP_STREAM -H 192.168.150.253 -l 60 -- -m 1472
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.150.253 () port 0 AF_INET
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec
212992    1472   60.00    32763206      0     6430.32
212992           60.00    23384299             4589.56

Interrupts on guest: 9872369
Packets/interrupt:   2.37

* After patch:

$ netperf -t UDP_STREAM -H 192.168.150.253 -l 60 -- -m 1472
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.150.253 () port 0 AF_INET
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec
212992    1472   60.00    32794646      0     6436.49
212992           60.00    32793501             6436.27

Interrupts on guest: 4941299
Packets/interrupt:   6.64

Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp> Acked-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
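The resulting completion logic looks roughly like this (a simplified sketch; virtqueue_napi_schedule() is the driver's local reschedule helper):

  opaque = virtqueue_enable_cb_prepare(rq->vq);
  if (napi_complete_done(napi, received)) {
          /* a late event may have slipped in before completion */
          if (unlikely(virtqueue_poll(rq->vq, opaque)))
                  virtqueue_napi_schedule(napi, rq->vq);
  } else {
          /* napi was rescheduled: keep interrupts disabled */
          virtqueue_disable_cb(rq->vq);
  }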
-
git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
David S. Miller authored
Alexei Starovoitov says: ==================== pull-request: bpf-next 2017-12-07 The following pull-request contains BPF updates for your net-next tree. The main changes are: 1) Detailed documentation of BPF development process from Daniel. 2) Addition of is_fullsock, snd_cwnd and srtt_us fields to bpf_sock_ops from Lawrence. 3) Minor follow up for bpf_skb_set_tunnel_key() from William. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jason Wang authored
The private destructor can be called when register_netdev() fails, with the rtnl lock held. This leads to a deadlock in tun_free_netdev(), which tries to take rtnl_lock itself. Fix this by switching to a spinlock for synchronization. Fixes: 96f84061 ("tun: add eBPF based queue selection method") Reported-by: Eric Dumazet <eric.dumazet@gmail.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Willem de Bruijn <willemb@google.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
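The reworked update path then looks roughly like this (a sketch; struct and helper names are illustrative). Unlike rtnl_lock, the spinlock can safely be taken on the failure path where rtnl is already held:

  spin_lock_bh(&tun->lock);
  old = rcu_dereference_protected(tun->steering_prog,
                                  lockdep_is_held(&tun->lock));
  rcu_assign_pointer(tun->steering_prog, new);
  spin_unlock_bh(&tun->lock);

  if (old)
          call_rcu(&old->rcu, tun_steering_prog_free);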
-
- 07 Dec, 2017 9 commits
-
-
David S. Miller authored
Ursula Braun says: ==================== smc: fixes 2017-12-07 here are some smc patches. The initial 4 patches are cleanups. Patch 5 gets rid of ib_post_sends in tasklet context to avoid drops at the peer due to out-of-order arrival. Patch 6 makes sure the Linux SMC code understands variable-sized CLC proposal messages built according to RFC 7609. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ursula Braun authored
According to RFC 7609 [1], the CLC proposal message contains an area of unknown length for future growth, and may additionally contain up to 8 IPv6 prefixes. The current version of the SMC code does not understand CLC proposal messages using these variable-length fields and is thus incompatible with SMC implementations in other operating systems. This patch makes sure SMC understands incoming CLC proposals * with arbitrary length values for future growth * with up to 8 IPv6 prefixes [1] SMC-R Informational RFC: http://www.rfc-editor.org/info/rfc7609 Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Reviewed-by: Hans Wippel <hwippel@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
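Parsing such a proposal means computing positions from length fields instead of relying on a fixed layout; a hypothetical sketch (struct and field names are assumptions based on the RFC 7609 message layout):

  static struct smc_clc_msg_proposal_prefix *
  smc_clc_proposal_get_prefix(struct smc_clc_msg_proposal *pclc)
  {
          /* the prefix area sits behind the variable-sized part */
          return (struct smc_clc_msg_proposal_prefix *)
                 ((u8 *)pclc + sizeof(*pclc) + ntohs(pclc->iparea_offset));
  }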
-
Ursula Braun authored
The SMC protocol requires a separate consumer cursor update to be sent if it cannot be piggybacked onto updates of the producer cursor. When receiving a blocked signal from the sender, this update is sent already in tasklet context. In addition, consumer cursor updates are sent after data reception. Sending of cursor updates is controlled by sequence numbers: to cope with stray messages, the receiver drops updates carrying older sequence numbers than an already received cursor update. Sending consumer cursor updates in tasklet context may therefore result in out-of-order sends and corresponding drops at the receiver. Since it is sufficient to send consumer cursor updates once the data is received, this patch gets rid of the consumer cursor update in tasklet context to guarantee in-sequence arrival of cursor updates. Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ursula Braun authored
When waiting for data to be received, it must be checked whether the peer signals shutdown. The SMC code uses two different checks for this purpose, even though a single check is sufficient. This patch removes the superfluous test for SOCK_DONE. Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ursula Braun authored
The SMC code never checks the sk_write_pending sock field, so there is no need to update it. Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ursula Braun authored
Let smc_clc_send_decline() return with an error if the amount sent is smaller than the length of an SMC decline message. Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
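One way to express the stricter return handling (a sketch, not the exact patch):

  len = kernel_sendmsg(smc->clcsock, &msg, &vec, 1,
                       sizeof(struct smc_clc_msg_decline));
  if (len < 0)
          return len;                     /* transport error */
  if (len < sizeof(struct smc_clc_msg_decline))
          return -EPROTO;                 /* short send: report failure */
  return 0;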
-
Ursula Braun authored
smc_close_active_abort() is used in smc_close.c only. Make it static. Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Fainelli authored
Introduce a configuration option, CONFIG_NET_DSA_LEGACY, allowing support for the old platform device and Device Tree binding registration to be compiled out. Support for these configurations is scheduled to be removed in 4.17. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Chan authored
On some dual-port NICs, the 2 ports have to be configured with compatible link speeds. Under some conditions, a port's configured speed may no longer be supported; the firmware will send a message to the driver when this happens. Improve the logic that prints out the warning by only printing it if we can determine the link speed that is no longer supported. If the speed is unknown or the port is in autoneg mode, skip the warning message. Reported-by: Thomas Bogendoerfer <tbogendoerfer@suse.de> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Tested-by: Thomas Bogendoerfer <tbogendoerfer@suse.de> Signed-off-by: David S. Miller <davem@davemloft.net>
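The guard amounts to something like the following (a hypothetical sketch; the identifiers are assumptions, not the driver's exact names):

  if (!(link_info->autoneg & BNXT_AUTONEG_SPEED) &&
      link_info->force_link_speed != BNXT_LINK_SPEED_UNKNOWN)
          netdev_warn(bp->dev,
                      "configured link speed %u no longer supported\n",
                      link_info->force_link_speed);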
-
- 06 Dec, 2017 17 commits
-
-
Alexei Starovoitov authored
Daniel Borkmann says: ==================== Few BPF doc updates Two changes: i) add the BPF trees to the MAINTAINERS file, and ii) add a BPF doc around the development process, similar to the netdev FAQ but describing BPF specifics. Thanks! ==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Daniel Borkmann authored
In the same spirit as the netdev FAQ, start a BPF FAQ as a collection of expectations and workflow details in the context of BPF patch processing. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Daniel Borkmann authored
i) Add the bpf and bpf-next trees to the maintainers entry so they can be found easily and picked up by test bots etc. that integrate all trees from the MAINTAINERS file. Suggested by Stephen while integrating the trees into linux-next. ii) Add the two headers defining BPF/XDP tracepoints to the list of files as well. Suggested-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Gao Feng authored
The receive flow of ipvlan l2 mode behaves the same as l3 mode for non-multicast packets, so use the existing function ipvlan_handle_mode_l3() instead of duplicating its statements in the non-multicast case. Signed-off-by: Gao Feng <gfree.wind@vip.163.com> Signed-off-by: David S. Miller <davem@davemloft.net>
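The consolidated receive path looks roughly like this (a simplified sketch; the multicast branch is elided):

  static rx_handler_result_t ipvlan_handle_mode_l2(struct sk_buff **pskb,
                                                   struct ipvl_port *port)
  {
          struct ethhdr *eth = eth_hdr(*pskb);

          if (is_multicast_ether_addr(eth->h_dest)) {
                  /* ... existing multicast handling ... */
                  return RX_HANDLER_PASS;
          }

          /* unicast behaves exactly like l3 mode, so reuse its handler */
          return ipvlan_handle_mode_l3(pskb, port);
  }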
-
Jiri Pirko authored
Currently, whenever the NETIF_F_HW_TC feature changes, we always silently allow it, but we do not actually disable the flows in HW on disable. That breaks user expectations. So just forbid disabling the feature while any filters are offloaded. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Reviewed-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
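The veto fits naturally into the driver's feature-change handler; a sketch (the rule-count field is an assumption, not mlxsw's exact bookkeeping):

  if (!(features & NETIF_F_HW_TC) && mlxsw_sp_port->acl_rule_count) {
          netdev_err(dev, "Active offloaded tc filters, can't turn hw_tc_offload off\n");
          return -EINVAL;
  }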
-
Willem de Bruijn authored
The statement no longer serves a purpose. Commit fa35864e ("tuntap: Fix for a race in accessing numqueues") added the ACCESS_ONCE to avoid a race condition with skb_queue_len. Commit 436acceb ("tuntap: remove unnecessary sk_receive_queue length check during xmit") removed the affected skb_queue_len check. Commit 96f84061 ("tun: add eBPF based queue selection method") split the function, reading the field a second time in the callee. The temp variable is now only read once, so just remove it. Signed-off-by: Willem de Bruijn <willemb@google.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Prashant Bhole authored
t_name cannot be NULL since it is an array field of a struct. Replace the NULL check on the static array with a string length check using strnlen(). Signed-off-by: Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp> Acked-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
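The resulting check, as a sketch (the surrounding variable names are illustrative):

  /* t_name is an in-struct array and can never be NULL; test for an
   * empty string instead
   */
  if (strnlen(trans->t_name, sizeof(trans->t_name)))
          strncpy(tinfo.t_name, trans->t_name, sizeof(tinfo.t_name));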
-
Cong Wang authored
TC actions are no longer freed in RCU callbacks and we should always hold the RTNL lock, so this spinlock is no longer needed. Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Jiri Pirko <jiri@mellanox.com> Cc: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Cong Wang authored
tcfm_dev always points to the correct netdev and we already hold a refcnt, so there is no need to use tcfm_ifindex to look it up again. If we were to support moving the target netdev across netns, using a pointer would be better than an ifindex. This also fixes dumping an obsolete ifindex: now, after the target device is gone, we simply dump 0 as the ifindex. Cc: Jiri Pirko <jiri@mellanox.com> Cc: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
William Tu says: ==================== ipv6: add ip6erspan collect_md mode Similar to erspan collect_md mode in ipv4, the first patch adds support for ip6erspan collect metadata mode. The second patch adds the test case using bpf_skb_[gs]et_tunnel_key helpers. The corresponding iproute2 patch: https://marc.info/?l=linux-netdev&m=151251545410047&w=2 ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
William Tu authored
Extend the existing tests for ip6erspan. Signed-off-by: William Tu <u9012063@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
William Tu authored
Similar to ip6 gretap and ip4 gretap, this patch allows the erspan tunnel to operate in collect metadata mode, so that the bpf_skb_[gs]et_tunnel_key() helpers can make use of it right away. Signed-off-by: William Tu <u9012063@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
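From a tc BPF program, the metadata can then be attached with the existing helper; a minimal sketch (addresses and tunnel id are illustrative):

  struct bpf_tunnel_key key = {};

  key.remote_ipv6[0] = bpf_htonl(0x20010db8);   /* 2001:db8::1 */
  key.remote_ipv6[3] = bpf_htonl(1);
  key.tunnel_id = 2;
  key.tunnel_ttl = 64;

  if (bpf_skb_set_tunnel_key(skb, &key, sizeof(key),
                             BPF_F_TUNINFO_IPV6) < 0)
          return TC_ACT_SHOT;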
-
Dan Carpenter authored
Smatch warns that: drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c:160 bnxt_tc_parse_actions() error: uninitialized symbol 'rc'. "rc" is either uninitialized or set to zero here so we can just remove the check. Fixes: 8c95f773 ("bnxt_en: add support for Flower based vxlan encap/decap offload") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Julia Cartwright says: ==================== macb rx filter cleanups Here's a proper patchset based on net-next. v1 -> v2: - Rebased on net-next - Add Nicolas's Acks - Reorder commits, putting the list_empty() cleanups prior to the others. - Added commit reverting the GFP_ATOMIC change. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julia Cartwright authored
Now that the rx_fs_lock is no longer held across allocation, it's safe to use GFP_KERNEL for allocating new entries. This reverts commit 81da3bf6 ("net: macb: change GFP_KERNEL to GFP_ATOMIC"). Cc: Julia Lawall <julia.lawall@lip6.fr> Signed-off-by: Julia Cartwright <julia@ni.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julia Cartwright authored
Commit ae8223de ("net: macb: Added support for RX filtering") introduces a lock, rx_fs_lock which is intended to protect the list of rx_flow items and synchronize access to the hardware rx filtering registers. However, the region protected by this lock is overscoped, unnecessarily including things like slab allocation. Reduce this lock scope to only include operations which must be performed atomically: list traversal, addition, and removal, and hitting the macb filtering registers. This fixes the use of kmalloc w/ GFP_KERNEL in atomic context. Fixes: ae8223de ("net: macb: Added support for RX filtering") Cc: Rafal Ozieblo <rafalo@cadence.com> Cc: Julia Lawall <julia.lawall@lip6.fr> Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com> Signed-off-by: Julia Cartwright <julia@ni.com> Signed-off-by: David S. Miller <davem@davemloft.net>
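The resulting shape of the add path, as a simplified sketch: allocate first, then take the lock only around the list and register work:

  newfs = kmalloc(sizeof(*newfs), GFP_KERNEL);  /* safe: outside the lock */
  if (!newfs)
          return -ENOMEM;

  spin_lock(&bp->rx_fs_lock);
  list_add(&newfs->list, &bp->rx_fs_list.list);
  /* ... program the hardware filtering registers ... */
  spin_unlock(&bp->rx_fs_lock);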
-
Julia Cartwright authored
The list_for_each_entry() macro already handles the case where the list is empty (by not executing the loop body). It's not necessary to handle this case specially, so stop doing so. Cc: Rafal Ozieblo <rafalo@cadence.com> Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com> Signed-off-by: Julia Cartwright <julia@ni.com> Signed-off-by: David S. Miller <davem@davemloft.net>
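In other words (a generic sketch; the callee name is illustrative), the guard can simply be dropped because an empty list makes the loop body run zero times:

  list_for_each_entry(item, &bp->rx_fs_list.list, list)
          gem_prog_cmp_regs(bp, &item->fs);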
-
- 05 Dec, 2017 7 commits
-
-
Cong Wang authored
No one actually uses it. Cc: Jiri Pirko <jiri@mellanox.com> Cc: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Vivien Didelot says: ==================== net: dsa: use per-port upstream port An upstream port is a local switch port used to reach a CPU port. DSA still considers a unique CPU port in the whole switch fabric and thus returns a unique upstream port for a given switch. This is wrong in a multiple-CPU-ports environment. We are now switching to using the dedicated CPU port assigned to each port in order to get rid of the deprecated unique tree CPU port. This patchset makes the dsa_upstream_port() helper take a port argument and goes one step closer to complete support for multiple CPU ports. Changes in v2: - reverse-christmas-tree-fy variables ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
The current dsa_upstream_port() helper still assumes a unique CPU port in the whole switch fabric. This is becoming wrong, as every port in the fabric now has a dedicated CPU port, and thus every port has its own upstream port. Add a port argument to the dsa_upstream_port() helper and fetch its CPU port instead of the deprecated unique fabric CPU port. A CPU or unused port has no dedicated CPU port, so return the port itself in this case. At the same time, change the return value from u8 to unsigned int, since there is no need to limit the size here. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
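The reworked helper looks roughly like this (a simplified sketch following the description above):

  static inline unsigned int dsa_upstream_port(struct dsa_switch *ds, int port)
  {
          struct dsa_port *dp = &ds->ports[port];

          /* CPU and unused ports have no dedicated CPU port */
          if (!dp->cpu_dp)
                  return port;

          return dsa_towards_port(ds, dp->cpu_dp->ds->index,
                                  dp->cpu_dp->index);
  }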
-
Vivien Didelot authored
DSA ports also need to have a dedicated CPU port assigned to them, because they need to know where to egress frames targeting the CPU, e.g. To_Cpu frames received on a Marvell Tag port. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
Move the setup of the global upstream port into the mv88e6xxx_setup_upstream_port() function. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
Add a helper function to set up the upstream port of a given port. This is the port used to reach the dedicated CPU port. This function will be extended later to set up the global upstream port as well. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
The mv88e6xxx driver currently assumes a single CPU port in the fabric and thus floods frames with an unknown DA on a single DSA port, the one that is one hop closer to the CPU port. With multiple CPU ports in mind, this isn't true anymore, because CPU ports could be found behind both DSA ports of a device sitting in between others. For example, in an A <-> B <-> C fabric where both A and C have CPU ports, device B has to flood such frames out of both of its DSA ports. This patch considers both CPU and DSA ports of a device as upstream ports, to which frames with an unknown DA are flooded. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
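The flooding decision then reduces to something like this (a sketch; the op name is an assumption):

  /* both CPU and DSA ports count as upstream for unknown-DA frames */
  bool flood = dsa_is_cpu_port(ds, port) || dsa_is_dsa_port(ds, port);

  if (chip->info->ops->port_set_egress_floods)
          return chip->info->ops->port_set_egress_floods(chip, port,
                                                         flood, flood);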
-