- 02 Dec, 2016 40 commits
-
Gregory CLEMENT authored
Until now the virtual address of the received buffer was stored in the cookie field of the rx descriptor. However, this field is only 32 bits wide, which prevents using the driver on a 64-bit architecture. With this patch the virtual address is stored in an array not shared with the hardware (so there is no longer any need to use the DMA API for it). Thanks to this, the array can be accessed through the cache, unlike the rx descriptor members. The change is done in the swbm path only because hwbm uses the cookie field; this also means that hwbm is currently not usable on 64-bit architectures. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Reviewed-by: Jisheng Zhang <jszhang@marvell.com> Tested-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
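As a rough illustration of the scheme (the structure and names below are placeholders, not mvneta's actual fields), the virtual addresses now live in a driver-private array indexed by descriptor position, so a full 64-bit pointer fits and lookups go through the cache:

    /* Illustrative sketch only -- hypothetical names, not the mvneta code. */
    struct demo_rx_queue {
            void **buf_virt_addr;   /* driver-private array, one slot per RX descriptor */
            int size;
    };

    /* Before this change the pointer was squeezed into the descriptor's 32-bit
     * cookie, which truncates virtual addresses on 64-bit architectures. */
    static void demo_save_buf(struct demo_rx_queue *rxq, int idx, void *va)
    {
            rxq->buf_virt_addr[idx] = va;   /* cacheable; the hardware never reads it */
    }

    static void *demo_load_buf(struct demo_rx_queue *rxq, int idx)
    {
            return rxq->buf_virt_addr[idx];
    }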
-
Gregory CLEMENT authored
For HWBM all buffers are allocated in mvneta_bm_construct(), and at runtime they are put into descriptors by hardware. There is no need to fill them at this point. Suggested-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Tested-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Gregory CLEMENT authored
For small frames, reuse the phys_addr variable instead of accessing the uncacheable value in the rx descriptor. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Tested-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
David Ahern says:

====================
net: Add bpf support for sockets

The recently added VRF support in Linux leverages the bind-to-device API for programs to specify an L3 domain for a socket. While SO_BINDTODEVICE has been around for ages, not every ipv4/ipv6 capable program has support for it. Even for those programs that do support it, the API requires processes to be started as root (CAP_NET_RAW), which is not desirable from a general security perspective.

This patch set leverages Daniel Mack's work to attach bpf programs to a cgroup to provide a capability to set sk_bound_dev_if for all AF_INET{6} sockets opened by a process in a cgroup when the sockets are allocated. For example:

1. configure vrf (e.g., using ifupdown2)
       auto eth0
       iface eth0 inet dhcp
           vrf mgmt

       auto mgmt
       iface mgmt
           vrf-table auto

2. configure cgroup
       mount -t cgroup2 none /tmp/cgroupv2
       mkdir /tmp/cgroupv2/mgmt
       test_cgrp2_sock /tmp/cgroupv2/mgmt 15

3. set shell into cgroup (e.g., can be done at login using pam)
       echo $$ >> /tmp/cgroupv2/mgmt/cgroup.procs

At this point all commands run in the shell (e.g., apt) have sockets automatically bound to the VRF (see output of ss -ap 'dev == <vrf>'), including processes not running as root. This capability enables running any program in a VRF context and is key to deploying Management VRF, a fundamental configuration for networking gear, with any Linux OS installation.

This patchset also exports the socket family, type and protocol as read-only, allowing bpf filters to deny a process in a cgroup the ability to open specific types of AF_INET or AF_INET6 sockets.

v7 - comments from Alexei
v6 - add export of socket family, type and protocol
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
David Ahern authored
Add examples preventing a process in a cgroup from opening a socket based on family, protocol and type. Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
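A minimal sketch of what such a filter can look like (this is not the sample added by the series; the section name and the numeric constants are spelled out here only to keep the snippet self-contained):

    #include <uapi/linux/bpf.h>

    #define SEC(NAME) __attribute__((section(NAME), used))

    /* Values normally taken from the socket headers. */
    #define AF_INET6        10
    #define SOCK_DGRAM      2
    #define IPPROTO_UDP     17

    SEC("cgroup/sock")
    int deny_ipv6_udp(struct bpf_sock *sk)
    {
            /* family, type and protocol are read-only for the program */
            if (sk->family == AF_INET6 &&
                sk->type == SOCK_DGRAM &&
                sk->protocol == IPPROTO_UDP)
                    return 0;       /* socket() in this cgroup fails with EPERM */
            return 1;               /* everything else is allowed */
    }

    char _license[] SEC("license") = "GPL";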
-
David Ahern authored
Add support for section names starting with cgroup/skb and cgroup/sock. Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David Ahern authored
Add socket family, type and protocol to bpf_sock, allowing bpf programs read-only access. Add __sk_flags_offset[0] to struct sock before the bitfield to programmatically determine the offset of the unsigned int containing protocol and type. Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David Ahern authored
Add a simple program to demonstrate the ability to attach a bpf program to a cgroup that sets sk_bound_dev_if for AF_INET{6} sockets when they are created. Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David Ahern authored
Add a new cgroup based program type, BPF_PROG_TYPE_CGROUP_SOCK. Similar to BPF_PROG_TYPE_CGROUP_SKB, programs can be attached to a cgroup and run any time a process in the cgroup opens an AF_INET or AF_INET6 socket. Currently only sk_bound_dev_if is exported to userspace for modification by a bpf program. This allows a cgroup to be configured such that AF_INET{6} sockets opened by processes are automatically bound to a specific device. In turn, this enables the running of programs that do not support SO_BINDTODEVICE in a specific VRF context / L3 domain. Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
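The kernel-side program for that use case can be as small as the sketch below (illustrative only, not the actual sample; the ifindex 15 mirrors the value used in the cover letter's example and is purely hypothetical):

    #include <uapi/linux/bpf.h>

    #define SEC(NAME) __attribute__((section(NAME), used))

    SEC("cgroup/sock")
    int bind_to_mgmt_vrf(struct bpf_sock *sk)
    {
            sk->bound_dev_if = 15;  /* hypothetical ifindex of the mgmt VRF device */
            return 1;               /* allow the socket, now bound to the VRF */
    }

    char _license[] SEC("license") = "GPL";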
-
David Ahern authored
Code move and rename only; no functional change intended. Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
My recent commit to get more precise rx/tx counters in ndo_get_stats64() can lead to crashes at device dismantle, as Jesper found out. We must prevent mlx4_en_fold_software_stats() from trying to access tx/rx rings if they are deleted. Fix this by adding a test against priv->port_up in mlx4_en_fold_software_stats(). Calling mlx4_en_fold_software_stats() from mlx4_en_stop_port() allows us to eventually broadcast the latest/current counters to rtnetlink monitors. Fixes: 40931b85 ("mlx4: give precise rx/tx bytes/packets counters") Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-and-bisected-by: Jesper Dangaard Brouer <brouer@redhat.com> Tested-by: Jesper Dangaard Brouer <brouer@redhat.com> Cc: Tariq Toukan <tariqt@mellanox.com> Cc: Saeed Mahameed <saeedm@dev.mellanox.co.il> Acked-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sunil Goutham authored
Transmit queue timeout issue is seen in two cases:
- Due to a race condition between setting stop_queue at xmit() and checking for stopped_queue in the NAPI poll routine, at times transmission from an SQ comes to a halt. This is fixed by using barriers; also, a check for SQ free descriptors was added, in case the SQ is stopped and there are only CQE_RX, i.e. no CQE_TX.
- Contrary to an assumption, a HW errata where HW doesn't stop transmission even though there are not enough CQEs available for a CQE_TX is not fixed in T88 pass 2.x. This results in a Qset error with 'CQ_WR_FULL' stalling transmission. This is fixed by adjusting the RXQ's RED levels for the CQ level such that there is always enough space left for CQE_TXs.
Signed-off-by: Sunil Goutham <sgoutham@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Hadar Hen Zion says:

====================
Offloading tc rules using the underlying hardware device

This series adds flower classifier support for offloading tc rules when the software ingress device is different from the hardware ingress device, such as when dealing with IP tunnels.

The first two patches are small fixes to flower, checking that the skip_hw flag wasn't set before calling the hardware offloading functions which will try to offload the rule.

The next two patches are infrastructure patches, a preparation for the fourth patch which adds support in flower for offloading rules when the ingress device is not a hardware device and therefore can't offload. In this case ndo_setup_tc is called with the mirred (egress) device.

The last three patches add mlx5e support to offload rules using the new "egress_device" flag.

Thanks,
Hadar

Changes from v0:
- check if CONFIG_NET_CLS_ACT is defined before calling tc_action_ops get_dev()
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hadar Hen Zion authored
When ndo_setup_tc is called with an egress_dev flag set, it means that the ndo call was executed on the mirred action (egress) device and not on the ingress device. In order to support this kind of ndo_setup_tc call, and insert the correct decap rule to the hardware, the uplink device on the same eswitch should be found. Currently, we use this resolution between the mirred device and the uplink on the same eswitch to offload vxlan shared device decap rules. Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hadar Hen Zion authored
Replace the representor private data with a net_device pointer holding the representor netdevice, instead of a void pointer holding mlx5e_priv. It will be used by a new eswitch service function returning the uplink representor netdevice. Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hadar Hen Zion authored
The VF representor udp tunnel ndo entries were removed by mistake; restore them. Fixes: 370bad0f ('net/mlx5e: Support HW (offloaded) and SW counters for SRIOV switchdev mode') Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hadar Hen Zion authored
In order to support hardware offloading when the device given by the tc rule is different from the underlying hardware device, extract the mirred (egress) device from the tc action when a filter is added, using the new tc_action_ops callback get_dev(). Flower caches the information about the mirred device and uses it for calling ndo_setup_tc on filter change, stats update and delete. Calling ndo_setup_tc of the mirred (egress) device instead of the ingress device will allow a resolution between the software ingress device and the underlying hardware device. The resolution will take place inside the offloading driver using the 'egress_device' flag added to the tc_to_netdev struct which is provided to the offloading driver. Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hadar Hen Zion authored
Add support for a new tc_action_ops callback, get_dev(). It is a generic hook which allows getting the underlying device when trying to offload a tc rule; in the case of the mirred action the returned device is the mirred (egress) device. Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hadar Hen Zion authored
Instead of providing many arguments to the fl_hw_{replace/destroy}_filter functions, just provide the cls_fl_filter struct that includes all the relevant args. This patch doesn't add any new functionality. Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hadar Hen Zion authored
Check that the skip_hw flag isn't set before calling the fl_hw_{replace/destroy}_filter and fl_hw_update_stats functions. Replace the call to tc_should_offload with tc_can_offload. tc_can_offload only checks if the device supports offloading; the check for the skip_hw flag is done earlier in the flow. Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hadar Hen Zion authored
Distinguish between two possible cases:
1. Not offloading the tc rule because the user set the 'skip_hw' flag.
2. Not offloading the tc rule because the device doesn't support offloading.
This patch doesn't add any new functionality. Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Westphal authored
Eric says: "By looking at tcpdump, and TS val of xmit packets of multiple flows, we can deduct the relative qdisc delays (think of fq pacing). This should work even if we have one flow per remote peer." Having random per flow (or host) offsets doesn't allow that anymore so add a way to turn this off. Suggested-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Florian Westphal <fw@strlen.de> Acked-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Westphal authored
Jiffies-based timestamps allow for easy inference of the number of devices behind NAT translators and also make tracking of hosts simpler. Commit ceaa1fef ("tcp: adding a per-socket timestamp offset") added the main infrastructure that is needed for per-connection ts randomization; in particular, writing/reading the on-wire tcp header format takes the offset into account so the rest of the stack can use normal tcp_time_stamp (jiffies).

So only two items are left:
- add a tsoffset for request sockets
- extend the tcp isn generator to also return another 32bit number in addition to the ISN.

Re-use of the ISN generator also means timestamps are still monotonically increasing for the same connection quadruple, i.e. PAWS will still work. Includes fixes from Eric Dumazet. Signed-off-by: Florian Westphal <fw@strlen.de> Acked-by: Eric Dumazet <edumazet@google.com> Acked-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
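Conceptually, the per-connection offset is just a keyed hash of the connection quadruple, so the same quadruple always maps to the same offset and PAWS keeps working. The sketch below illustrates the idea with an arbitrary mixing function (the kernel reuses the ISN generator's cryptographic hash, not this one; all names here are hypothetical):

    #include <stdint.h>

    static uint32_t demo_ts_offset(uint32_t saddr, uint32_t daddr,
                                   uint16_t sport, uint16_t dport,
                                   const uint32_t secret[2])
    {
            uint32_t h = secret[0];

            h ^= saddr * 0x9e3779b1u;
            h ^= daddr * 0x85ebca77u;
            h ^= (((uint32_t)sport << 16) | dport) * 0xc2b2ae3du;
            h ^= secret[1];
            /* added to tcp_time_stamp on xmit, subtracted again on receive */
            return h;
    }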
-
David S. Miller authored
Manish Rangankar says:

====================
Add QLogic FastLinQ iSCSI (qedi) driver.

This series introduces a hardware offload iSCSI initiator driver for the 41000 Series Converged Network Adapters (579xx chip) by QLogic. The overall driver design includes a common module ('qed') and protocol specific dependent modules ('qedi' for iSCSI).

This is an open-iSCSI driver; modifications to open-iSCSI user components 'iscsid', 'iscsiuio', etc. are required for the solution to work. The user space changes are also in the process of being submitted.

https://groups.google.com/forum/#!forum/open-iscsi

The 'qed' common module, under drivers/net/ethernet/qlogic/qed/, is enhanced with functionality required for the iSCSI support.

This series is based on:
net tree base: Merge of net and net-next as of 11/29/2016

Changes from RFC v2:
1. qedi patches are squashed into a single patch to prevent kbuild robot warnings.
2. Fixed 'hw_p_cpuq' incompatible pointer type.
3. Fixed sparse incompatible types in comparison expression.
4. Misc fixes with latest 'checkpatch --strict' option.
5. Remove int_mode option from MODULE_PARAM.
6. Prefix all MODULE_PARAM params with qedi_*.
7. Use CONFIG_QED_ISCSI instead of CONFIG_QEDI.
8. Added bad task mem access fix.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yuval Mintz authored
This patch adds out of order packet handling for hardware offloaded iSCSI. Out of order packet handling requires driver buffer allocation and assistance. Signed-off-by: Arun Easi <arun.easi@cavium.com> Signed-off-by: Yuval Mintz <yuval.mintz@cavium.com> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yuval Mintz authored
This adds the backbone required for the various HW initializations which are necessary for the iSCSI driver (qedi) for the QLogic FastLinQ 4xxxx line of adapters - FW notification, resource initializations, etc. Signed-off-by: Arun Easi <arun.easi@cavium.com> Signed-off-by: Yuval Mintz <yuval.mintz@cavium.com> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Rasmus Villemoes authored
This is already using the %pM printf extension; might as well also use %ph to make the code smaller. Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Arnd Bergmann authored
When CONFIG_INET is disabled, the new selftest results in a link error:

drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.o: In function `mlx5e_test_loopback':
en_selftest.c:(.text.mlx5e_test_loopback+0x2ec): undefined reference to `ip_send_check'
en_selftest.c:(.text.mlx5e_test_loopback+0x34c): undefined reference to `udp4_hwcsum'

This hides the specific test in that configuration. Fixes: 0952da79 ("net/mlx5e: Add support for loopback selftest") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniel Borkmann authored
After 326fe02d ("net/mlx4_en: protect ring->xdp_prog with rcu_read_lock"), the rcu_read_lock() in bpf_prog_run_xdp() is superfluous, since callers need to hold rcu_read_lock() already to make sure the BPF program doesn't get released in the background. Thus, drop it from bpf_prog_run_xdp(), as it can otherwise be misleading. Still, keeping bpf_prog_run_xdp() is useful as it allows for grepping in XDP-supporting drivers and keeps the typecheck on the context intact.

For mlx4, this means we don't have a double rcu_read_lock() anymore. nfp can just make use of bpf_prog_run_xdp(), too. For qede, just move rcu_read_lock() out of the helper. When the driver gets atomic replace support, this will move to call-sites eventually.

mlx5 needs actual fixing as it has the same issue as described already in 326fe02d ("net/mlx4_en: protect ring->xdp_prog with rcu_read_lock"): that is, we're under RCU bh at this time, BPF programs are released via call_rcu(), and call_rcu() != call_rcu_bh(), so we need to properly mark the read side as programs can get xchg()'ed in mlx5e_xdp_set() without a queue reset.

Fixes: 86994156 ("net/mlx5e: XDP fast RX drop bpf programs support") Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
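For reference, the resulting driver-side pattern looks roughly like the sketch below (hypothetical ring structure, not code from any of the drivers touched here); the caller, typically the NAPI poll loop, already holds rcu_read_lock():

    #include <linux/filter.h>
    #include <linux/rcupdate.h>

    /* Hypothetical RX ring; real drivers keep the XDP program in their own
     * ring structure and publish it via rcu_assign_pointer()/xchg(). */
    struct demo_rx_ring {
            struct bpf_prog __rcu *xdp_prog;
    };

    /* Runs in NAPI context, i.e. under the caller's rcu_read_lock() --
     * bpf_prog_run_xdp() itself no longer takes it after this change. */
    static bool demo_rx_xdp(struct demo_rx_ring *ring, struct xdp_buff *xdp)
    {
            struct bpf_prog *prog = rcu_dereference(ring->xdp_prog);
            u32 act = XDP_PASS;

            if (prog)
                    act = bpf_prog_run_xdp(prog, xdp);

            switch (act) {
            case XDP_PASS:
                    return true;    /* hand the frame to the regular stack */
            case XDP_ABORTED:
            case XDP_DROP:
            default:
                    return false;   /* frame consumed or dropped by the program */
            }
    }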
-
Soheil Hassas Yeganeh authored
Only when ICMP packets are enqueued onto the error queue, sk_err is also set. Before f5f99309 (sock: do not set sk_err in sock_dequeue_err_skb), a subsequent error queue read would set sk_err to the next error on the queue, or 0 if empty. As no error types other than ICMP set this field, sk_err should not be modified upon dequeuing them. Only for ICMP errors, reset the (racy) sk_err. Some applications, like traceroute, rely on it and go into a futile busy POLLERR loop otherwise. In principle, sk_err has to be set while an ICMP error is queued. Testing is_icmp_err_skb(skb_next) approximates this without requiring a full queue walk. Applications that receive both ICMP and other errors cannot rely on this legacy behavior, as other errors do not set sk_err in the first place. Fixes: f5f99309 (sock: do not set sk_err in sock_dequeue_err_skb) Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: Willem de Bruijn <willemb@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Acked-by: Maciej Żenczykowski <maze@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
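The application-side pattern being preserved looks roughly like this userspace sketch (not taken from traceroute; the timeout and buffer sizes are arbitrary):

    #include <poll.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Wait for an ICMP error on a UDP probe socket and drain it from the
     * error queue.  poll() reports POLLERR here because the kernel sets
     * sk_err when the ICMP error is queued -- the behaviour this patch
     * keeps intact. */
    static int drain_one_icmp_error(int fd)
    {
            struct pollfd pfd = { .fd = fd, .events = POLLERR };
            char payload[512], control[512];
            struct iovec iov = { .iov_base = payload, .iov_len = sizeof(payload) };
            struct msghdr msg = {
                    .msg_iov = &iov, .msg_iovlen = 1,
                    .msg_control = control, .msg_controllen = sizeof(control),
            };

            if (poll(&pfd, 1, 1000) <= 0)
                    return -1;      /* nothing queued within 1s */

            return recvmsg(fd, &msg, MSG_ERRQUEUE | MSG_DONTWAIT);
    }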
-
David S. Miller authored
Thomas Graf says:

====================
bpf: BPF for lightweight tunnel encapsulation

This series implements BPF program invocation from dst entries via the lightweight tunnels infrastructure. The BPF program can be attached to lwtunnel_input(), lwtunnel_output() or lwtunnel_xmit() and sees an L3 skb as context. Programs attached to input and output are read-only. Programs attached to lwtunnel_xmit() can modify packets, push headers and redirect packets.

The facility can be used to:
- Collect statistics and generate sampling data for a subset of traffic based on the dst utilized by the packet, thus allowing to extend the existing realms.
- Apply additional per route/dst filters to prohibit certain outgoing or incoming packets based on BPF filters. In particular, this allows to maintain per dst custom state across multiple packets in BPF maps and apply filters based on statistics and behaviour observed over time.
- Attachment of L2 headers at transmit where resolving the L2 address is not required.
- Possibly many more.

v3 -> v4:
- Bumped LWT_BPF_MAX_HEADROOM from 128 to 256 (Alexei)
- Renamed bpf_skb_push() helper to bpf_skb_change_head() to relate to the existing bpf_skb_change_tail() helper (Alexei/Daniel)
- Added check in __bpf_redirect_common() to verify that the program added a link header before redirecting to an l2 device. Adding the check to the lwt-bpf code was considered but dropped due to the massive code required to retrieve the net_device via the per-cpu redirect buffer. A test case was added to cover the scenario when a program redirects to an l2 device without adding an appropriate l2 header. (Alexei)
- Prohibited access to tc_classid (Daniel)
- Collapsed bpf_verifier_ops instance for lwt in/out as they are identical (Daniel)
- Some cosmetic changes

v2 -> v3:
- Added real world sample lwt_len_hist_kern.c which demonstrates how to collect a histogram of packet sizes for all packets flowing through a number of routes.
- Restricted output to be read-only. Since the header can no longer be modified, the rerouting functionality has been removed again.
- Added test case which covers destructive modification of packet data.

v1 -> v2:
- Added new BPF_LWT_REROUTE return code for the program to indicate that a new route lookup should be performed. Suggested by Tom.
- New sample to illustrate rerouting
- New patch 05: Recursion limit for lwtunnel_output for the case when the user creates a circular dst redirection. Also resolves the issue for ILA.
- Fix to ensure headroom for a potential future L2 header is still guaranteed
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
Adds a series of tests to verify the functionality of attaching BPF programs at LWT hooks. Also adds a sample which collects a histogram of packet sizes which pass through an LWT hook.

$ ./lwt_len_hist.sh
Starting netserver with host 'IN(6)ADDR_ANY' port '12865' and family AF_UNSPEC
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.253.2 () port 0 AF_INET : demo
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.00    39857.69
       1 -> 1          : 0        |                                        |
       2 -> 3          : 0        |                                        |
       4 -> 7          : 0        |                                        |
       8 -> 15         : 0        |                                        |
      16 -> 31         : 0        |                                        |
      32 -> 63         : 22       |                                        |
      64 -> 127        : 98       |                                        |
     128 -> 255        : 213      |                                        |
     256 -> 511        : 1444251  |********                                |
     512 -> 1023       : 660610   |***                                     |
    1024 -> 2047       : 535241   |**                                      |
    2048 -> 4095       : 19       |                                        |
    4096 -> 8191       : 180      |                                        |
    8192 -> 16383      : 5578023  |*************************************   |
   16384 -> 32767      : 632099   |***                                     |
   32768 -> 65535      : 6575     |                                        |

Signed-off-by: Thomas Graf <tgraf@suug.ch> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
Registers new BPF program types which correspond to the LWT hooks:
- BPF_PROG_TYPE_LWT_IN   => dst_input()
- BPF_PROG_TYPE_LWT_OUT  => dst_output()
- BPF_PROG_TYPE_LWT_XMIT => lwtunnel_xmit()

The separate program types are required to differentiate between the capabilities each LWT hook allows:

* Programs attached to dst_input() or dst_output() are restricted and may only read the data of an skb. This prevents modification and possible invalidation of already validated packet headers on receive and the construction of illegal headers while the IP headers are still being assembled.

* Programs attached to lwtunnel_xmit() are allowed to modify packet content as well as prepending an L2 header via a newly introduced helper bpf_skb_change_head(). This is safe as lwtunnel_xmit() is invoked after the IP header has been assembled completely.

All BPF programs receive an skb with L3 headers attached and may return one of the following return codes:
BPF_OK - Continue routing as per nexthop
BPF_DROP - Drop skb and return EPERM
BPF_REDIRECT - Redirect skb to device as per redirect() helper. (Only valid in lwtunnel_xmit() context)

The return codes are binary compatible with their TC_ACT_ relatives to ease compatibility. Signed-off-by: Thomas Graf <tgraf@suug.ch> Acked-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
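To make the hook semantics concrete, a read-only program for the dst_input() hook can look like the sketch below (illustrative only; the section name, length threshold and license boilerplate are not taken from the series' samples):

    #include <uapi/linux/bpf.h>

    #define SEC(NAME) __attribute__((section(NAME), used))

    /* BPF_PROG_TYPE_LWT_IN: may inspect the skb but never rewrite it. */
    SEC("drop_giants")
    int drop_giants(struct __sk_buff *skb)
    {
            if (skb->len > 1500)
                    return BPF_DROP;        /* drop the packet */
            return BPF_OK;                  /* continue routing via the nexthop */
    }

    char _license[] SEC("license") = "GPL";

Such an object would then be attached per route through the lwtunnel BPF encap support added to iproute2 alongside this series, along the lines of "ip route add 192.168.253.0/24 encap bpf in obj lwt_drop.o section drop_giants dev veth0" (addresses, object file and device names are placeholders).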
-
Thomas Graf authored
A route on the output path hitting a RTN_LOCAL route will keep the dst associated on its way through the loopback device. On the receive path, the dst_input() call will thus invoke the input handler of the route created in the output path. Thus, lwt redirection for input must be done for dsts allocated in the output path as well. Also, if a route is cached in the input path, the allocated dst should respect lwtunnel configuration on the nexthop as well. Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
orig_output for IPv4 was only set for dsts which hit an input route. Set it consistently for locally generated traffic as well to allow lwt to continue the dst_output() path as configured by the nexthop. Fixes: 25368623 ("lwt: Add support to redirect dst.input") Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Saeed Mahameed says:

====================
Mellanox 100G mlx5 updates 2016-11-29

The following series from Tariq and Roi provides some critical fixes and updates for the mlx5e driver.

From Tariq:
- Fix driver coherent memory huge allocation issues by fragmenting completion queues, in a way that is transparent to the netdev driver, by providing a new buffer type "mlx5_frag_buf" with the same access API.
- Create a UMR MKey per RQ to have better scalability.

From Roi:
- Some fixes for the encap-decap support and tc flower added lately to the mlx5e driver.

v1->v2:
- Fix start index in error flow of mlx5_frag_buf_alloc_node, pointed out by Eric.

This series was generated against commit:
31ac1c19 ("geneve: fix ip_hdr_len reserved for geneve6 tunnel.")
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Roi Dayan authored
Handling flow encap entry should be inside tc del flow and is only relevant for offloaded eswitch TC rules. Fixes: 11a457e9b6c1 ("net/mlx5e: Add basic TC tunnel set action for SRIOV offloads") Signed-off-by: Roi Dayan <roid@mellanox.com> Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Roi Dayan authored
Change the function that deletes offloaded TC rule to get struct mlx5e_tc_flow instance which contains both the flow handle and flow attributes. This is a cleanup needed for downstream patches, it doesn't change any functionality. Signed-off-by: Roi Dayan <roid@mellanox.com> Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Roi Dayan authored
According to the reverse unwinding principle, on delete time we should first handle deletion of the steering rule and later handle the vlan deletion from the eswitch. Fixes: 8b32580d ("net/mlx5e: Add TC vlan action for SRIOV offloads") Signed-off-by: Roi Dayan <roid@mellanox.com> Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Roi Dayan authored
We will never find a flow with the same cookie as cls_flower always allocates a new flow and the cookie is the allocated memory address. Fixes: e3a2b7ed ("net/mlx5e: Support offload cls_flower with drop action") Signed-off-by: Roi Dayan <roid@mellanox.com> Reviewed-by: Hadar Hen Zion <hadarh@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-