- 18 Feb, 2016 40 commits
-
Alexander Duyck authored
Recent changes should have enabled support for IPv6-based tunnels and support for TSO with outer UDP checksums. As such we can update the feature flags to reflect that. In addition we can clean up the flags that aren't needed, such as SCTP and RXCSUM, since having the bits there doesn't add any value. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
The documentation in the datasheets for the XL710 does not call out any reason to exclude support for IPv6-based tunnels. As such I am dropping the code that was excluding these tunnel types from having their port numbers recognized. This way we can take advantage of things such as checksum offload for inner headers over IPv6-based VXLAN or GENEVE tunnels. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
This patch contains a number of fixes to make certain that we are using the correct protocols when parsing both the inner and outer headers of a frame that mixes IPv4 and IPv6 between the inner and outer headers. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Acked-by: Kiran Patil <kiran.patil@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
The XL722 has support for providing the outer UDP tunnel checksum on transmits. Make use of this feature to support segmenting UDP tunnels with outer checksums enabled. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
This is mostly a minor clean-up for the Rx checksum path in order to avoid some of the unnecessary conditional checks that were being applied. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
Add exception handling to the Tx checksum path so that we can handle cases of TSO where the frame is bad, or Tx checksum where we didn't recognize a protocol. Drop I40E_TX_FLAGS_CSUM as it is unused, and move the CHECKSUM_PARTIAL check into the function itself so that we can decrease the indentation. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
This patch defers writing to the Tx descriptor bits until we know we have successfully completed a given operation. So for example we defer updating the tunneling portion of the context descriptor until we have fully identified the type. The advantage to this approach is that we can assemble values as we go instead of having to try and kludge everything together all at once. As a result we can significantly clean up the tunneling configuration, for instance, since we can just do a pointer walk and compute the distance between each pair of header pointers. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
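The "assemble values as we go, write once" idea can be sketched outside the driver. Below is a minimal, self-contained C sketch with invented bit positions; it is not the i40e descriptor layout, only an illustration of deferring the hardware write until the frame is fully identified.

    /* Accumulate descriptor bits locally while parsing, then store the
     * value once everything is known.  Field layout is invented for
     * illustration and does not match the real i40e context descriptor. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        volatile uint64_t ctx_desc = 0;   /* stands in for the HW descriptor */
        uint64_t tunneling = 0;           /* local accumulator               */
        int is_udp_tunnel = 1;
        unsigned int outer_ip_len = 20, l4_tunnel_len = 16;

        /* build the value step by step as each header is parsed */
        tunneling |= (uint64_t)(outer_ip_len >> 2) << 2;    /* outer IP length */
        if (is_udp_tunnel)
            tunneling |= 1ULL << 9;                         /* tunnel type     */
        tunneling |= (uint64_t)(l4_tunnel_len >> 1) << 12;  /* tunnel length   */

        /* single write to the descriptor once the frame is understood */
        ctx_desc = tunneling;

        printf("ctx 0x%llx\n", (unsigned long long)ctx_desc);
        return 0;
    }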
-
Alexander Duyck authored
This patch adds support for IPv6 extension headers in setting up the Tx checksum. Without this patch extension headers would cause IPv6 traffic to fail as the transport protocol could not be identified. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
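The technique behind this patch, walking the IPv6 next-header chain until a non-extension protocol is found, can be illustrated with a small standalone C sketch. This is not the driver's code: it handles only the generic extension-header layout (hop-by-hop, routing, destination options), assumes well-formed input, and the helper names are invented.

    /* Walk the IPv6 next-header chain to find the transport protocol.
     * Simplified: fragment and other special headers are not handled. */
    #include <stdint.h>
    #include <stdio.h>
    #include <netinet/in.h>              /* IPPROTO_* */

    struct ext_hdr {                     /* generic extension header layout  */
        uint8_t nexthdr;
        uint8_t hdrlen;                  /* length in 8-byte units, minus 1  */
    };

    static int is_ext(uint8_t proto)
    {
        return proto == IPPROTO_HOPOPTS || proto == IPPROTO_ROUTING ||
               proto == IPPROTO_DSTOPTS;
    }

    /* start: first byte after the fixed IPv6 header; proto: its nexthdr */
    static uint8_t find_l4_proto(const uint8_t *start, uint8_t proto,
                                 unsigned int *l4_off)
    {
        const uint8_t *p = start;

        while (is_ext(proto)) {
            const struct ext_hdr *eh = (const struct ext_hdr *)p;
            proto = eh->nexthdr;
            p += (eh->hdrlen + 1) * 8;
        }
        *l4_off = (unsigned int)(p - start);
        return proto;
    }

    int main(void)
    {
        /* one destination-options header (8 bytes) followed by TCP */
        uint8_t pkt[16] = { IPPROTO_TCP, 0 /* hdrlen: 8 bytes */ };
        unsigned int off;
        uint8_t l4 = find_l4_proto(pkt, IPPROTO_DSTOPTS, &off);

        printf("l4 proto %u at offset %u\n", l4, off);   /* "6 at 8" */
        return 0;
    }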
-
Alexander Duyck authored
This patch fixes two issues. First was the fact that ip_hdr(skb)->protocol was being used to test for the outer transport protocol. This completely breaks IPv6 support. Second was the fact that we cleared the flag for v4 going to v6, but we didn't take care of tx_flags going the other way. As such we would have the v6 flag still set even if the inner header was v4. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
The Tx checksum path was maintaining a set of 3 pointers and two lengths in order to prepare the packet for being checksummed. The thing is we only really needed 2 pointers, and the lengths that were being maintained can easily be computed. As such we can replace the IPv4 and IPv6 header pointers with one single union that represents both, or a generic pointer to the start of the network header. For the L4 headers we can do the same with TCP and a generic pointer to the start of the transport header. The length of the TCP header is obtained by simply multiplying doff by 4, and the network header length can be obtained by subtracting the network header pointer from the transport header pointer. While I was at it I renamed l4_hdr to l4_proto to make it a bit more clear and less likely to be confused with l4.hdr which is the transport header pointer. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
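The pointer-and-length arithmetic described here is easy to show in isolation. The sketch below is plain userspace C, not the driver code: it only demonstrates keeping one network-header pointer and one transport-header pointer in unions and deriving both lengths from them (doff * 4 for TCP, pointer difference for the network header).

    #include <stdint.h>
    #include <stdio.h>
    #include <netinet/ip.h>
    #include <netinet/ip6.h>
    #include <netinet/tcp.h>

    union net_hdr {                  /* one pointer for IPv4 or IPv6 */
        struct iphdr *v4;
        struct ip6_hdr *v6;
        unsigned char *hdr;
    };

    union l4_hdr {                   /* one pointer for the transport header */
        struct tcphdr *tcp;
        unsigned char *hdr;
    };

    int main(void)
    {
        _Alignas(4) unsigned char frame[128] = { 0 };
        union net_hdr ip;
        union l4_hdr l4;

        ip.hdr = frame;              /* start of the network header   */
        l4.hdr = frame + 20;         /* start of the transport header */
        l4.tcp->doff = 5;            /* 5 * 32-bit words = 20 bytes   */

        /* network header length = distance between the two pointers */
        unsigned int net_len = (unsigned int)(l4.hdr - ip.hdr);
        /* TCP header length = doff * 4 */
        unsigned int tcp_len = l4.tcp->doff * 4;

        printf("network header %u bytes, TCP header %u bytes\n",
               net_len, tcp_len);
        return 0;
    }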
-
Alexander Duyck authored
This patch goes through and pulls all of the spots where we were updating either the TCP or IP checksums in the TSO and checksum paths into the TSO function. The general idea here is that we should only be updating the header after we have completed an skb_cow_head check to verify the head is writable. One other advantage to doing this is that it makes things much more obvious. For example, in the case of IPv6 there was one spot where the offset of the IPv4 header checksum was being updated, which is obviously incorrect. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
This patch makes it so that the L4 header offsets and such can be ignored when dealing with the L3 checksum and length update. This is done by making use of two things. First, we can use the offset from the start of the packet to the L4 header to determine the L4 offset, and from that we can then make use of the data offset to determine the full length of the headers. As far as adjusting the checksum to remove the length, we can simply add the inverse of the length instead of having to recompute the entire pseudo-header without the length. In the case of an IPv6 header this should be significantly cheaper, since we can make use of a value we already needed instead of having to read the source and destination address out of the packet. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
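The checksum adjustment mentioned here is standard ones'-complement arithmetic: removing a 16-bit value from an existing sum is the same as adding its bitwise inverse. A standalone sketch (toy values, helper names made up, not the driver code):

    #include <stdint.h>
    #include <stdio.h>

    static uint16_t csum_fold(uint32_t sum)
    {
        while (sum >> 16)
            sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)sum;
    }

    /* ones'-complement sum over 16-bit words, e.g. a pseudo-header */
    static uint16_t csum16(const uint16_t *p, int n)
    {
        uint32_t sum = 0;
        while (n--)
            sum += *p++;
        return csum_fold(sum);
    }

    int main(void)
    {
        /* toy pseudo-header words: src, dst, proto, length */
        uint16_t phdr[4] = { 0x0a00, 0x0a01, 0x0006, 0x0234 };

        uint16_t with_len = csum16(phdr, 4);    /* includes the length   */
        uint16_t without  = csum16(phdr, 3);    /* recomputed without it */

        /* same value obtained by adding the inverse of the length
         * (equal up to the two ones'-complement representations of zero) */
        uint16_t adjusted = csum_fold(with_len + (uint16_t)~phdr[3]);

        printf("recomputed %#x, adjusted %#x\n", without, adjusted);
        return 0;
    }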
-
Alexander Duyck authored
Instead of casting u32 values to u64 it makes more sense to just start out with u64 values in the first place. This way we don't need to create a mess with all of the casts needed to populate a 64-bit value. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
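The cast clean-up amounts to choosing the working type up front; a trivial before/after in C (bit positions are made up, not the i40e descriptor layout):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* awkward: u32 working values, casts needed when assembling */
        uint32_t cmd32 = 0x3, off32 = 0x14;
        uint64_t desc_a = ((uint64_t)cmd32 << 4) | ((uint64_t)off32 << 16);

        /* cleaner: keep the working values as u64 from the start */
        uint64_t cmd = 0x3, off = 0x14;
        uint64_t desc_b = (cmd << 4) | (off << 16);

        printf("0x%llx 0x%llx\n", (unsigned long long)desc_a,
               (unsigned long long)desc_b);
        return 0;
    }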
-
Alexander Duyck authored
The i40e and i40evf drivers contained code for inserting an outer checksum on UDP tunnels. The issue, however, is that the upper levels of the stack never requested such an offload, and it results in possible errors. In addition the same logic was being applied to the Rx side, where it was attempting to validate the outer checksum, but the logic there was incorrect in that it was testing for the resultant sum to be equal to the header checksum instead of being equal to 0. Since this code is so massively flawed, and doing things that we didn't ask it to do, I am just dropping it, and will bring it back later to use as an offload for SKB_GSO_UDP_TUNNEL_CSUM which can make use of such a feature. As far as the Rx feature, I am dropping it completely since it would need to be massively expanded and applied to IPv4 and IPv6 checksums for all parts, not just the one that supports Tx checksum offload for the outer header. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
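The Rx flaw called out here is the classic Internet-checksum verification rule: a received header verifies by summing all of its 16-bit words, checksum field included, and checking that the complement of the folded ones'-complement result is 0; comparing a recomputed sum against the stored field is not the same test. A toy C sketch (not the driver code, header contents invented):

    #include <stdint.h>
    #include <stdio.h>

    static uint16_t csum_fold(uint32_t sum)
    {
        while (sum >> 16)
            sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)sum;
    }

    /* valid when the complement of the folded sum over the whole header,
     * including the checksum field, is zero */
    static int checksum_ok(const uint16_t *hdr, int nwords)
    {
        uint32_t sum = 0;
        for (int i = 0; i < nwords; i++)
            sum += hdr[i];
        return (uint16_t)~csum_fold(sum) == 0;
    }

    int main(void)
    {
        /* toy 4-word "header"; the last word is the checksum field */
        uint16_t hdr[4] = { 0x1234, 0xabcd, 0x0042, 0 };

        hdr[3] = (uint16_t)~csum_fold((uint32_t)hdr[0] + hdr[1] + hdr[2]);

        printf("valid: %d\n", checksum_ok(hdr, 4));   /* prints 1 */
        hdr[2] ^= 1;                                  /* corrupt a word */
        printf("valid: %d\n", checksum_ok(hdr, 4));   /* prints 0 */
        return 0;
    }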
-
Jamal Hadi Salim authored
Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
When VLANs are created / destroyed on a VLAN filtering bridge (MASTER flag set), the configuration is passed down to the hardware. However, when only the flags (e.g. PVID) are toggled, the configuration is done in the software bridge alone. While it is possible to pass these flags to hardware when invoked with the SELF flag set, this creates inconsistency with regards to the way the VLANs are initially configured. Pass the flags down to the hardware even when the VLAN already exists and only the flags are toggled. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefan Roese authored
Add code to select SGMII-to-copper mode upon SGMII interface selection. Signed-off-by: Stefan Roese <sr@denx.de> Cc: Andrew Lunn <andrew@lunn.ch> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: David S. Miller <davem@davemloft.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alison Schofield authored
divamnt stores a start_time at module init and uses it to calculate elapsed time. The elapsed time, stored in secs and usecs, is part of the trace data the driver maintains for the DIVA Server ISDN cards. No change to the format of that time data is required. To avoid overflow on 32-bit systems use ktime_get_ts64() to return the elapsed monotonic time since system boot. This is a change from real to monotonic time. Since the driver only stores elapsed time, monotonic time is sufficient and more robust against real time clock changes. These new monotonic values can be more useful for debugging because they can be easily compared to other monotonic timestamps. Note that elapsed time values will now start at system boot time rather than module load time, so they will differ slightly from previously reported values. Remove declaration and init of previously unused time constants: start_sec, start_usec. Signed-off-by: Alison Schofield <amsfield22@gmail.com> Reviewed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: David S. Miller <davem@davemloft.net>
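A userspace analogue of the real-to-monotonic switch described above, using clock_gettime(CLOCK_MONOTONIC) (the closest counterpart to ktime_get_ts64() available outside the kernel); purely an illustration of why monotonic elapsed time is immune to wall-clock changes:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec start, now;

        clock_gettime(CLOCK_MONOTONIC, &start);   /* at "module init" */
        /* ... work happens here ... */
        clock_gettime(CLOCK_MONOTONIC, &now);

        long sec  = now.tv_sec - start.tv_sec;
        long usec = (now.tv_nsec - start.tv_nsec) / 1000;
        if (usec < 0) {                           /* borrow from seconds */
            usec += 1000000;
            sec  -= 1;
        }
        printf("elapsed: %ld s %ld us\n", sec, usec);
        return 0;
    }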
-
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
David S. Miller authored
Jeff Kirsher says: ==================== 40GbE Intel Wired LAN Driver Updates 2016-02-17 This series contains updates to i40e/i40evf once again. Mitch updates the use of a define instead of a magic number. Adds support for packet split receive on VFs, which is disabled by default. Expands on a code comment which was not verbose or really helpful. Fixes an issue where a reset that fails to complete was not properly setting the adapter state, which would cause a panic on rmmod; set the adapter state to DOWN to avoid the panic. Jesse cleans up a "dump" in debugfs that never panned out to be useful. Anjali adds a workaround for cases where we might have interrupts that get lost but write-back (WB) happened. Fixes an issue by falling back to enabling unicast, multicast and broadcast promiscuous mode when the driver must disable its use of "default port" (defport mode) due to internal incompatibility with Multiple Function per Port (MFP). Fixes an issue where queues were being enabled/disabled in the interrupt handler, which should never happen. Kiran cleans up the code which used a hard-coded base VEB SEID, since it was removed from the specification. Shannon adds a few bits for better debug messages. Fixes an obscure corner case where it was possible to clear the NVM update wait flag when no update_done message was actually received. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Colin Ian King authored
Ever since commit 04ed3e74 ("net: change netdev->features to u32") the format string fmt_long_hex has not been used, so we may as well remove it. Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Catherine Sullivan authored
Bump. Change-ID: Ie280dc67e37a1cf667c3469499a4fb90f4177b75 Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Anjali Singhai Jain authored
In MFP mode particularly, when we were setting the PF VSI in limited promiscuous, the HW switch was still mirroring the outgoing packets from other VSIs (VF/VMDq) onto the PF VSI. With this new bit set, the mirroring doesn't happen any more, and so we are in limited promiscuous on the PF VSI in MFP, which is similar to defport. An API check is not required, since this bit is reserved for FW API version < 1.5. Also update copyright year in file headers. Change-ID: I9840cb95f11dde733d943cb03ce84f68b9611bc8 Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Shannon Nelson authored
In one obscure corner case, it was possible to clear the NVM update wait flag when no update_done message was actually received. This patch cleans the event descriptor before use, and moves the opcode check to where it won't get done if there was no event to clean. Also update copyright year in file headers. Change-ID: I68bbc41965e93f4adf07cbe98b9dfd63d41509a4 Signed-off-by: Shannon Nelson <shannon.nelson@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Mitch Williams authored
If a reset fails to complete, the driver gets its affairs in order and awaits the cold solace of rmmod. Unfortunately, it was not properly setting the adapter state, which would cause a panic on rmmod, instead of the desired surcease. Set the adapter state to DOWN in this case, and avoid a panic. Change-ID: I6fdd9906da52e023f8dc744f7da44b5d95278ca9 Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Shannon Nelson authored
Make sure we return EBUSY while finishing up a reset, and add a few bits for better debug messages. Change-ID: I23f6c28a8d96d7aa171abcc265737cec7826c292 Signed-off-by: Shannon Nelson <shannon.nelson@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Mitch Williams authored
Explain why we cannot remove this code, even though it works differently than any of our other interrupt cause handling code. Change-ID: Ie66203bd037a466066036611c31d44f759ec5176 Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Anjali Singhai Jain authored
The queues should never be enabled/disabled in the interrupt handler, ICR0 interrupt enable should be the only thing that needs to be dynamically changed in the handler. This patch fixes that. Without this patch X722 platforms were seeing weird ping timings when in Legacy mode since it takes a whole lot of time for the HW/FW to re-enable queues. Change-ID: If065afc45d81c5a19d4a94a00cd5b8f61cefc40c Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Mitch Williams authored
In the case where we have a page fully used by receive data, we need to release the page fully to the stack. Instead of calling get_page (which increments the page count) followed by free_page (which decrements the page count), just donate our reference to the stack. Although this donation is not tax deductible, it does allow us to avoid two very expensive atomic operations that reverse each other. Change-ID: If70739792d5748995fc175ec92ac2171ed4ad8fc Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
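The reference-donation pattern generalizes beyond pages; a toy refcount in plain C (C11 atomics, not the kernel's page code) shows why the get + put pair is redundant when ownership can simply be handed over:

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct buf {
        atomic_int refs;
    };

    static void buf_put(struct buf *b)
    {
        if (atomic_fetch_sub(&b->refs, 1) == 1)   /* last reference */
            free(b);
    }

    /* consumer takes ownership of exactly one reference */
    static void hand_to_stack(struct buf *b)
    {
        buf_put(b);                  /* "stack" drops it when done */
    }

    int main(void)
    {
        struct buf *b = malloc(sizeof(*b));
        if (!b)
            return 1;
        atomic_init(&b->refs, 1);    /* our reference */

        /* costly version: take a reference, hand off, drop our own
         * (two extra atomic ops that cancel each other out):
         *     atomic_fetch_add(&b->refs, 1);
         *     hand_to_stack(b);
         *     buf_put(b);
         */

        /* cheaper: donate the reference we already hold */
        hand_to_stack(b);
        return 0;
    }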
-
Kiran Patil authored
The fixed mapping of SEIDs has been removed from the specification, hence this patch removes code which was using a hard-coded base VEB SEID. Changed FCoE code to use "hw->pf_id" to obtain the correct "idx" and verified. Removed defines for BASE VSI/VEB SEID and BASE_PF_SEID since they are not used anymore. Change-ID: Id507cf4b1fae1c0145e3f08ae9ea5846ea5840de Signed-off-by: Kiran Patil <kiran.patil@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Anjali Singhai Jain authored
This patch falls back to enabling unicast, multicast and broadcast promiscuous mode when the driver must disable its use of "default port" aka defport mode (which is normally used to provide a promiscuous mode), due to internal incompatibility with Multiple Function per Port (aka MFP). The situation that requires this patch is when Physical Function 0 is the device being used, and it can support SR-IOV when MFP is enabled, via the driver creating a VEB on an MFP-enabled adapter. Change-ID: Ie90b00d0d58782a5dfcf2c3c9725a2eb90bd63d8 Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Anjali Singhai Jain authored
This patch adds a workaround for cases where we might have interrupts that got lost but WB happened. If that happens without this patch we will see a tx_timeout. To work around it, this patch goes ahead and reschedules NAPI in that situation, if NAPI is not already scheduled. We also add a counter in ethtool to keep track of when we detect a case of tx_lost_interrupt. Note: napi_reschedule() can be safely called from process/service_task context and is done in other drivers as well without an issue. Change-ID: I00f98f1ce3774524d9421227652bef20fcbd0d20 Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Jesse Brandeburg authored
This patch makes use of a pointer called hw consistent in the i40e_remove function. Change-ID: Idacc7ff0a09a68289c57457a78618bf5497de077 Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Mitch Williams authored
Support packet split receive on VFs. This is off by default but can be enabled using ethtool private flags. Because we need to trigger a reset from outside of i40evf_main.c, create a new function to do so, and export it. Also update copyright year in file headers. Change-ID: I721aa5d70113d3d6d94102e5f31526f6fc57cbbb Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Jesse Brandeburg authored
There was a completely unused file "dump" in debugfs that never panned out to be useful. Change-ID: I12bb9e37b5a83299725dda815a8746157baf6562 Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Mitch Williams authored
We have a define for this, use it. No functional change. Change-ID: Ic0e3ea4f562e46de63b2a8de07f291ccc10205fd Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
David S. Miller authored
Jiri Benc says: ==================== vxlan: clean up rx path, consolidating extension handling The rx path of VXLAN turned over time into kind of spaghetti code. The rx processing is split between vxlan_udp_encap_recv and vxlan_rcv but in an artificial way: vxlan_rcv is just called at the end of vxlan_udp_encap_recv, continuing the rx processing where vxlan_udp_encap_recv left it. There's no clear border between those two functions. It makes sense to combine those functions into one; this will actually be needed for VXLAN-GPE, where we'll need to skip part of the processing, which is hard to do with the current code. However, both functions are too long already. This patchset shortens them, consolidating the extension handling that is spread all around and moving it to separate functions. (Later patchsets will do more consolidation in other parts of the functions, with the final goal of merging vxlan_udp_encap_recv and vxlan_rcv.) In the process of consolidating the extension handling, I needed to deal with the vni field in a generic way, as its lower 8 bits mean different things for different extensions. While cleaning up the code to strictly distinguish between "vni" and "vni field" (which contains the vni plus an additional byte), I also converted the code not to convert endianness back and forth. The full picture can be seen at: https://github.com/jbenc/linux-vxlan/commits/master ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
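The "vni" versus "vni field" distinction in the cover letter follows from the VXLAN header layout in RFC 7348: the second 32-bit word carries the 24-bit VNI in its upper bits, and the low byte is reserved (and is what the extensions reuse). A short sketch of handling it without swapping byte order back and forth per packet (names are illustrative, not the kernel's):

    #include <stdint.h>
    #include <stdio.h>
    #include <arpa/inet.h>

    /* extract the VNI from the network-order vni field */
    static uint32_t vni_from_field(uint32_t vni_field_be)
    {
        return ntohl(vni_field_be) >> 8;
    }

    /* or stay in network byte order: mask off the low byte once and
     * compare big-endian values directly */
    static int vni_matches(uint32_t vni_field_be, uint32_t wanted_vni_field_be)
    {
        return (vni_field_be & htonl(0xffffff00)) == wanted_vni_field_be;
    }

    int main(void)
    {
        uint32_t field = htonl((42u << 8) | 0x7f);  /* VNI 42, low byte in use */

        printf("vni = %u, match = %d\n", vni_from_field(field),
               vni_matches(field, htonl(42u << 8)));
        return 0;
    }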
-
Jiri Benc authored
For metadata based tunnels, VNI is ignored when doing vxlan device lookups (because such tunnel receives all VNIs). However, this was not honored by vxlan_xmit_one when doing encapsulation bypass. Move the check for metadata based tunnel to the common place where it belongs. Signed-off-by: Jiri Benc <jbenc@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Benc authored
When there are unrecognized flags present in the vxlan header, it doesn't make much sense to return the packet for further UDP processing, especially considering that for other invalid flag combinations we drop the packet because of previous checks. This means we return a positive value only at the beginning of the function where tun_dst is not yet allocated. This allows us to get rid of the bad_flags and error jump labels. When we're dropping the packet, we need to free tun_dst now. Signed-off-by: Jiri Benc <jbenc@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Benc authored
Bring the extension handling to a single place and move the actual handling logic out of vxlan_udp_encap_recv as much as possible. Signed-off-by: Jiri Benc <jbenc@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Benc authored
To make vxlan_udp_encap_recv shorter and more comprehensible. Signed-off-by: Jiri Benc <jbenc@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-