- 25 Apr, 2016 39 commits
-
-
Michal Kazior authored
This strips the qdisc-specific bits out of the code and makes it slightly more reusable. Codel will be used by wireless/mac80211 in the future. Signed-off-by: Michal Kazior <michal.kazior@tieto.com> Signed-off-by: David S. Miller <davem@davemloft.net>
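As a rough illustration of why the split helps (a simplified sketch, not the actual interface carved out into include/net/codel.h): the CoDel decision only needs timestamps and a couple of parameters, so once the qdisc-specific enqueue/dequeue plumbing is stripped away, any queue owner, including mac80211, can drive the same logic.

  #include <math.h>
  #include <stdbool.h>
  #include <stdint.h>

  /* Simplified sketch only -- names do not match include/net/codel.h. */
  struct codel_params {
      uint64_t target_ns;    /* acceptable standing queue delay */
      uint64_t interval_ns;  /* window over which delay must stay high */
  };

  struct codel_vars {
      bool     dropping;     /* currently in the dropping state? */
      uint32_t count;        /* drops since entering that state */
      uint64_t first_above;  /* when delay first stayed above target */
      uint64_t drop_next;    /* next scheduled drop time */
  };

  /* CoDel control law: space successive drops by interval / sqrt(count). */
  static uint64_t codel_control_law(uint64_t t, const struct codel_params *p,
                                    uint32_t count)
  {
      return t + (uint64_t)(p->interval_ns / sqrt((double)count));
  }

  /* Should the packet dequeued at 'now' after waiting 'sojourn_ns' be
   * dropped?  Only plain values are passed in, so a qdisc and a wireless
   * queue can share the logic without knowing about each other. */
  static bool codel_should_drop(uint64_t now, uint64_t sojourn_ns,
                                const struct codel_params *p,
                                struct codel_vars *v)
  {
      if (sojourn_ns < p->target_ns) {
          v->first_above = 0;
          v->dropping = false;
          return false;
      }
      if (!v->first_above) {
          v->first_above = now + p->interval_ns;  /* start the grace window */
          return false;
      }
      if (now < v->first_above)
          return false;                           /* not bad for long enough */
      if (!v->dropping) {
          v->dropping = true;
          v->count = 1;
      } else if (now >= v->drop_next) {
          v->count++;
      } else {
          return false;
      }
      v->drop_next = codel_control_law(now, p, v->count);
      return true;
  }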
-
Phil Sutter authored
Signed-off-by: Phil Sutter <phil@nwl.cc> Acked-by: Sabrina Dubroca <sd@queasysnail.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Benc authored
Commit 751a587a ("route: fix breakage after moving lwtunnel state") moved lwtstate to the end of dst_entry for 32bit archs. This makes it share the cacheline with __refcnt, which had an unknown effect on performance. For this reason, the pointer was kept in place for 64bit archs. However, later performance measurements showed this is of no concern. It turns out that every performance sensitive path that accesses lwtstate also accesses struct rtable or struct rt6_info, which share the same cache line. Thus, to get rid of a few #ifdefs, move the field to the end of the struct also for 64bit. Signed-off-by: Jiri Benc <jbenc@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
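A schematic of the #ifdef removal described above (stand-in types so it compiles outside the kernel tree; the real layout lives in include/net/dst.h):

  /* Stand-ins for the real kernel types. */
  struct lwtunnel_state;
  typedef struct { int counter; } atomic_t;

  /* Before: the pointer sat in a different place depending on word size,
   * to keep it off the cache line holding __refcnt on 64-bit. */
  struct dst_entry_before {
      /* ... other fields ... */
  #ifdef CONFIG_64BIT
      struct lwtunnel_state *lwtstate;
  #endif
      atomic_t __refcnt;          /* hot on every dst grab/put */
      /* ... other fields ... */
  #ifndef CONFIG_64BIT
      struct lwtunnel_state *lwtstate;
  #endif
  };

  /* After: one layout for all architectures.  The paths that touch
   * lwtstate also touch struct rtable / rt6_info on the same cache line,
   * so sharing it with __refcnt measured as a non-issue. */
  struct dst_entry_after {
      /* ... other fields ... */
      atomic_t __refcnt;
      /* ... other fields ... */
      struct lwtunnel_state *lwtstate;
  };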
-
David S. Miller authored
Yuval Mintz says: ==================== qed*: driver updates [This was previously termed 'eeprom access et al.', but that seemed a bit inappropriate given we've dropped the eeprom patch for now. Still waiting for some inputs on that one, BTW.] This patch series contains some ethtool-related enhancements. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sudarsana Reddy Kalluru authored
The APIs for making this sort of configuration [e.g., via ethtool] are already present in qede, but the current configuration flow in qed doesn't respect them. Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yuval Mintz authored
There's some inconsistency in the current logic that determines whether the link settings of a given interface can be changed; i.e., in all modes other than the so-called `default' mode, interfaces are forbidden from changing the configuration - but even this rule is not applied to all user APIs that may change the configuration. Instead, let the core module [qed] decide whether an interface can change the configuration by supporting a new API function. We also revise the current rule, allowing all interfaces to change their configuration while laying the infrastructure for future modes where an interface would be blocked from making such a configuration. Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
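A sketch of the shape of such an API (the names below are illustrative, not the exact qed/qede callback added by the patch): the protocol driver asks the core module before applying any link change.

  #include <errno.h>
  #include <stdbool.h>

  /* Illustrative only -- not the exact qed/qede interface. */
  struct example_core_ops {
      /* Core module decides whether this interface may change its
       * link configuration in the current mode. */
      bool (*can_link_change)(void *cdev);
      int  (*set_link)(void *cdev, bool autoneg, unsigned int speed);
  };

  struct example_dev {
      void *cdev;                         /* core-module handle */
      const struct example_core_ops *ops;
  };

  /* ethtool-style entry point on the protocol-driver (qede) side. */
  int example_set_link_settings(struct example_dev *edev,
                                bool autoneg, unsigned int speed)
  {
      if (!edev->ops->can_link_change(edev->cdev))
          return -EOPNOTSUPP;   /* current mode forbids link changes */

      return edev->ops->set_link(edev->cdev, autoneg, speed);
  }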
-
Yuval Mintz authored
Adds a getter for the interface's private flags. The only parameter currently supported is whether the interface is a coupled function [required for supporting 100g]. Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yuval Mintz authored
There's a difference in statistics' names starting at qed and propagating to qede, where egress counters indicate ranges while ingress counters indicate the high end. Align all statistics to follow the same convention - the name indicates the range. Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
We should call consume_skb(skb) when the skb is properly consumed, or kfree_skb(skb) when the skb must be dropped in the error case. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
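The convention, sketched in kernel style (the hardware helpers below are hypothetical; only consume_skb()/kfree_skb() are real): consume_skb() on the success path so drop monitors stay quiet, kfree_skb() only when the packet really is being dropped.

  #include <linux/netdevice.h>
  #include <linux/skbuff.h>

  /* Hypothetical hardware helpers, declared only for the sketch. */
  bool example_hw_queue_full(struct net_device *dev);
  void example_hw_send(struct net_device *dev, struct sk_buff *skb);

  static netdev_tx_t example_xmit(struct sk_buff *skb, struct net_device *dev)
  {
      if (example_hw_queue_full(dev)) {
          kfree_skb(skb);              /* a real drop: drop monitor should see it */
          dev->stats.tx_dropped++;
          return NETDEV_TX_OK;
      }

      example_hw_send(dev, skb);
      consume_skb(skb);                /* normal consumption: not a drop */
      return NETDEV_TX_OK;
  }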
-
Eric Dumazet authored
We now have proper per-listener and per-network-namespace counters for SYN packets that might be dropped. We replace the kfree_skb() with consume_skb() to be drop monitor [1] friendly, and remove an obsolete comment: FastOpen SYN packets can carry payload in them just fine. [1] perf record -a -g -e skb:kfree_skb sleep 1; perf report Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
David S. Miller authored
Jeff Kirsher says: ==================== 10GbE Intel Wired LAN Driver Updates 2016-04-25 This series contains updates to ixgbe and ixgbevf. Emil provides several patches, starting with the consolidation of the logic behind configuring spoof checking. He also fixed an issue that was causing link problems for backplane devices, because x550em_a/x devices did not have a default value for mac->ops.setup_link, and refactored the ethtool stats to bring the logic closer to how ixgbe handles stats and to set up per-queue stats for ixgbevf. Mark adds a new register to wait for previous register writes to complete before issuing a register read, which is needed when slower links are in use, and fixed the flow control setup for x550em_a, where the incorrect fc_setup function was being used. Don added a workaround for crosstalk on empty SFP+ cages, since on some systems the crosstalk could lead to link flap. Jake converts ixgbe and ixgbevf to use the BIT() macro. Alex Duyck adds support for partial GSO segmentation in the case of tunnels for ixgbe and ixgbevf, then preps for HyperV by moving the API negotiation into mac_ops. Arnd Bergmann provides a fix for the ARM compile warnings in linux-next by converting the use of a udelay() to msleep(). ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Nicolas Dichtel says: ==================== netlink: align attributes when needed (patchset #2) This is the continuation (series #2) of the work done to align netlink attributes when these attributes contain some 64-bit fields. In patch #3, I didn't modify the function ila_encap_nlsize(). I was waiting for feedback on this patch: http://patchwork.ozlabs.org/patch/613766/ If it's approved, there will be an update to switch nla_total_size() to nla_total_size_64bit() after the merge of net into net-next. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Craig Gallek authored
d894ba18 ("soreuseport: fix ordering for mixed v4/v6 sockets") was merged as a bug fix to the net tree. Two conflicting changes were committed to net-next before the above fix was merged back to net-next: ca065d0c ("udp: no longer use SLAB_DESTROY_BY_RCU") 3b24d854 ("tcp/dccp: do not touch listener sk_refcnt under synflood") These changes switched the data structure used for TCP and UDP sockets from hlist_nulls to hlist. This patch applies the necessary parts of the net tree fix to net-next which were not applied automatically as part of the merge. Fixes: 1602f49b ("Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net") Signed-off-by: Craig Gallek <kraig@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Valdis reported tons of stack dumps caused by the WARN_ON() in sock_owned_by_user(). This test needs to be relaxed if/when lockdep disables itself. Note that other lockdep_sock_is_held() callers are all from rcu_dereference_protected() sections, which already are disabled if/when lockdep has been disabled. Fixes: fafc4e1e ("sock: tigthen lockdep checks for sock_owned_by_user") Reported-by: Valdis Kletnieks <Valdis.Kletnieks@vt.edu> Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Arnd Bergmann authored
The newly added x550em_a support causes a link failure on ARM because of an overly long time passed into udelay(): ERROR: "__bad_udelay" [drivers/net/ethernet/intel/ixgbe/ixgbe.ko] undefined! There are multiple variants of the ixgbe_acquire_swfw_sync_*() function, and the other ones all use msleep(), so we can safely assume that all callers are allowed to sleep, which makes msleep() a better replacement than mdelay(). Signed-off-by: Arnd Bergmann <arnd@arndb.de> Fixes: 49425dfc ("ixgbe: Add support for x550em_a 10G MAC type") Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
This patch moves API negotiation into mac_ops. The general idea here is that, with HyperV on the way, anything that will have different versions between HyperV and a standard VF needs to be abstracted enough that we can have a separate function for each, so changes in one don't break the other. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
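Roughly, the abstraction looks like the sketch below (names are stand-ins, not the real ixgbevf structures): the negotiation call becomes a function pointer in the MAC ops table, so a Hyper-V backend can later install its own implementation without touching the callers.

  /* Stand-in types for the sketch. */
  struct example_hw;

  struct example_mac_ops {
      /* negotiate the mailbox API version with the PF (or with Hyper-V) */
      int (*negotiate_api_version)(struct example_hw *hw, int api);
      /* ... other MAC operations ... */
  };

  /* Standard VF: negotiate over the PF mailbox. */
  static int example_negotiate_api_vf(struct example_hw *hw, int api)
  {
      /* mailbox exchange elided */
      return 0;
  }

  static const struct example_mac_ops example_vf_mac_ops = {
      .negotiate_api_version = example_negotiate_api_vf,
  };

  /* A Hyper-V backend would provide its own negotiate function and ops
   * table; callers simply invoke the pointer and never know the difference. */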
-
Alexander Duyck authored
This patch adds support for partial GSO segmentation in the case of tunnels. Specifically, with this change the driver can perform segmentation as long as the frame either has IPv6 inner headers, or we are allowed to mangle the IP IDs on the inner header. This is needed because we will not be modifying any fields from the start of the outer transport header to the start of the inner transport header, as we are treating them like a block of IP options. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
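At the netdev level this is advertised roughly as in the hedged outline below (not the driver's exact probe code): NETIF_F_GSO_PARTIAL plus gso_partial_features tell the stack which tunnel offloads may be segmented while the outer headers are treated as an opaque block.

  #include <linux/netdevice.h>

  /* Hedged sketch of the feature advertisement, not ixgbe's actual code. */
  static void example_setup_gso_partial(struct net_device *netdev)
  {
      /* Tunnel offloads that only work when the stack leaves the outer
       * headers alone, as if they were a block of IP options. */
      netdev->gso_partial_features = NETIF_F_GSO_GRE_CSUM |
                                     NETIF_F_GSO_UDP_TUNNEL_CSUM;

      /* The stack will then segment only when the inner headers are IPv6
       * or the inner IP IDs may be mangled. */
      netdev->hw_features |= NETIF_F_GSO_PARTIAL |
                             netdev->gso_partial_features;
  }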
-
Jacob Keller authored
Also clean up a case where we're bit shifting a value into place, and use an unsigned constant. Make use of the unsigned ('u') suffix in places where the BIT() macro is not appropriate. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Jacob Keller authored
Make use of GENMASK instead of open coding the equivalent operation incorrectly. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
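For reference, GENMASK(h, l) builds a mask with bits h down to l set; a small stand-alone example with the kernel macro inlined (the register field below is made up):

  #include <stdio.h>

  /* Same shape as the kernel's GENMASK() macro, inlined here so the
   * example compiles on its own. */
  #define BITS_PER_LONG (8 * sizeof(unsigned long))
  #define GENMASK(h, l) \
      (((~0UL) - (1UL << (l)) + 1) & (~0UL >> (BITS_PER_LONG - 1 - (h))))

  int main(void)
  {
      /* Hypothetical 4-bit field in bits 11..8 of a register value. */
      unsigned long reg = 0xdeadbeefUL;
      unsigned long field = (reg & GENMASK(11, 8)) >> 8;

      /* Open-coded equivalents such as (0xf << 8) are easy to get wrong
       * when the field width or position changes; GENMASK(11, 8) is not. */
      printf("mask=%#lx field=%#lx\n", GENMASK(11, 8), field);
      return 0;
  }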
-
Jacob Keller authored
Several areas of ixgbe were written before widespread usage of the BIT(n) macro. With the impending release of GCC 6 and its associated new warnings, some usages such as (1 << 31) have been noted within the ixgbe driver source. Fix these wholesale and prevent future issues by simply using the BIT() macro instead of hand-coded bit shifts. Also fix a few shifts that are shifting values into place by using the 'u' suffix to indicate unsigned. It doesn't strictly matter in these cases because we're not shifting by too large a value, but these are all unsigned values and should be indicated as such. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
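A small stand-alone example of the conversion (the kernel's BIT() is inlined here; the register bits themselves are made up):

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Kernel-style BIT(), inlined so the example stands alone. */
  #define BIT(nr) (1UL << (nr))

  int main(void)
  {
      /* (1 << 31) shifts into the sign bit of a signed int, which newer
       * compilers warn about; BIT(31) shifts an unsigned long instead. */
      uint32_t enable_old = 1u << 31;          /* needs the 'u' suffix */
      uint32_t enable_new = (uint32_t)BIT(31); /* no suffix games */

      /* Multi-bit values shifted into place keep the shift but use an
       * unsigned constant, e.g. 0x3u << 30 rather than 0x3 << 30. */
      uint32_t mode = 0x3u << 30;

      printf("%#" PRIx32 " %#" PRIx32 " %#" PRIx32 "\n",
             enable_old, enable_new, mode);
      return 0;
  }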
-
Don Skidmore authored
It is possible on some systems that crosstalk could lead to link flap on empty SFP+ cages. A new NVM bit was defined to let SW know it needs to implement the workaround, which consists of verifying that there is a module in the cage before acting on the LSC (link status change). Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
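The shape of the workaround, sketched with stand-in names (not the driver's actual registers or helpers):

  #include <stdbool.h>

  /* Stand-ins for the sketch. */
  struct example_hw {
      bool need_crosstalk_fix;   /* set from the new NVM bit */
  };

  bool example_sfp_cage_populated(struct example_hw *hw);
  void example_check_link(struct example_hw *hw);

  /* Link-status-change (LSC) handling. */
  static void example_handle_lsc(struct example_hw *hw)
  {
      /* On affected boards, crosstalk can fake an LSC on an empty SFP+
       * cage; only act if a module is actually present. */
      if (hw->need_crosstalk_fix && !example_sfp_cage_populated(hw))
          return;

      example_check_link(hw);
  }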
-
Mark Rustad authored
Somehow the wrong fc_setup function was used for x550em_a, so correct that. Also set setup_link to NULL as its value is determined later, just like it is with X550EM_x. Signed-off-by: Mark Rustad <mark.d.rustad@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Emil Tantilov authored
Implement per-queue statistics for packets, bytes and busy poll specific counters. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Emil Tantilov authored
This brings the logic closer to how we handle the stats in ixgbe and it sets us up for introducing per-queue stats. Use IXGBEVF_STAT and IXGBEVF_NETDEV_STAT for accessing the driver and netdev stats respectively. This way we don't have to calculate the stats based on register values which could lead to the counters not being initialized properly when the interface is down. IXGBEVF_QUEUE_STATS_LEN is set to include the number of queues. Also some defines were renamed to use the IXGBEVF prefix. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Mark Rustad authored
Use a new register to wait for previous register writes to complete before issuing a register read. This is needed when slower links are in use. Signed-off-by: Mark Rustad <mark.d.rustad@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Haiyang Zhang authored
RNDIS_STATUS_NETWORK_CHANGE event is handled as two "half events" -- media disconnect & connect. The second half should be added to the list head, not to the tail, so that all events are processed in normal order. Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Reviewed-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sridhar Samudrala authored
This field is used to record the RX queue index for a redirect action, passed via the ring_cookie field in struct ethtool_rx_flow_spec, which is a u64 value. For example, after adding a filter rule to redirect to a VF using ethtool:

  # echo 4 > /sys/class/net/p4p1/device/sriov_numvfs
  # ethtool -N p4p1 flow-type ip4 src-ip 192.168.0.1 action 0x100000000

querying for the rule shows the Action as 'Direct to queue 0':

  # ethtool -n p4p1
  4 RX rings available
  Total 1 rules

  Filter: 2045
  Rule Type: Raw IPv4
  Src IP addr: 192.168.0.1 mask: 0.0.0.0
  Dest IP addr: 0.0.0.0 mask: 255.255.255.255
  TOS: 0x0 mask: 0xff
  Protocol: 0 mask: 0xff
  L4 bytes: 0x0 mask: 0xffffffff
  VLAN EtherType: 0x0 mask: 0xffff
  VLAN: 0x0 mask: 0xffff
  User-defined: 0x0 mask: 0xffffffffffffffff
  Action: Direct to queue 0

With this fix, ethtool will report the right queue index even for VFs:

  Action: Direct to queue 4294967296

Here 4294967296 corresponds to 0x100000000. We need to update 'ethtool' to report the queue index as a hex value so that it is more user friendly and matches the 'action' value that is passed when adding the rule. Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
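The ring_cookie packs both the queue and the VF; the mask layout follows include/uapi/linux/ethtool.h and is inlined below so the example stands alone:

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Mask layout as in include/uapi/linux/ethtool.h */
  #define ETHTOOL_RX_FLOW_SPEC_RING        0x00000000FFFFFFFFULL
  #define ETHTOOL_RX_FLOW_SPEC_RING_VF     0x000000FF00000000ULL
  #define ETHTOOL_RX_FLOW_SPEC_RING_VF_OFF 32

  int main(void)
  {
      /* The action value from the example rule above. */
      uint64_t ring_cookie = 0x100000000ULL;

      uint64_t queue = ring_cookie & ETHTOOL_RX_FLOW_SPEC_RING;
      uint64_t vf = (ring_cookie & ETHTOOL_RX_FLOW_SPEC_RING_VF) >>
                    ETHTOOL_RX_FLOW_SPEC_RING_VF_OFF;

      /* 0x100000000 means VF 1, queue 0 -- which is why the driver must
       * keep the whole 64-bit cookie rather than truncating it. */
      printf("vf=%" PRIu64 " queue=%" PRIu64 "\n", vf, queue);
      return 0;
  }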
-
Emil Tantilov authored
X550EM_a/x did not have a default value for mac->ops.setup_link, which was causing link issues for backplane devices. This patch sets mac->ops.setup_link to ixgbe_setup_mac_link_X540 for X550EM_a/x, which is also the default for X550. This will result in mac->ops.setup_link calling the link setup function for the respective PHY type in case we do not need a special function to deal with it. Reported-by: Ken Cox <jkc@redhat.com> Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Emil Tantilov authored
Previously the PF driver would only set VLAN spoof checking if the VF had created VLANs. This was done by setting and checking a counter (vlan_count) whenever a VLAN was created by the VF. However, it is possible for vlan_count to be != 0 while there are no VLANs assigned to the VF, because the count increments every time VLAN 0 is added on ifdown/up, which resulted in VLAN spoofing always being set for those VFs. This patch cleans up the logic by unconditionally setting VLAN spoof checking based on how the VF is configured (via ip link set ethX vf Y spoofchk on/off). This change also resolves an issue where VLAN spoofing could remain set even after being disabled by the user, because the driver enabled VLAN spoof checking every time a VLAN was added to the VF but only allowed changes in the setting if vlan_count != 0. Also, default_vf_vlan_id and vlans_enabled were removed from the vf_data_storage structure since they are not being used in the driver. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Emil Tantilov authored
Consolidate the logic behind configuring spoof checking: Move the setting of the MAC, VLAN and Ethertype spoof checking into ixgbe_ndo_set_vf_spoofchk(). Change ixgbe_set_mac_anti_spoofing() to set MAC spoofing per VF similar to the VLAN and Ethertype functions - this allows us to call the helper functions in ixgbe_ndo_set_vf_spoofchk() for all spoof check types and only disable MAC spoof checking when creating MACVLAN. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
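A sketch of the consolidated flow (all helper names are stand-ins): every spoof-check type is driven from the one ndo, based only on the requested setting rather than on how many VLANs the VF happens to have.

  #include <stdbool.h>

  /* Stand-in types and helpers for the sketch. */
  struct example_adapter;
  void example_set_mac_spoofchk(struct example_adapter *a, int vf, bool on);
  void example_set_vlan_spoofchk(struct example_adapter *a, int vf, bool on);
  void example_set_ethertype_spoofchk(struct example_adapter *a, int vf, bool on);

  /* 'ip link set ethX vf Y spoofchk on|off' ends up here. */
  int example_ndo_set_vf_spoofchk(struct example_adapter *adapter, int vf, bool on)
  {
      example_set_mac_spoofchk(adapter, vf, on);
      example_set_vlan_spoofchk(adapter, vf, on);
      example_set_ethertype_spoofchk(adapter, vf, on);
      return 0;
  }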
-
- 24 Apr, 2016 1 commit
-
-
Eric Dumazet authored
The Linux TCP stack painfully segments all TSO/GSO packets before retransmitting them. This was fine back in the days when TSO/GSO were emerging, with their bugs, but we believe the dark age is over. Keeping big packets, both in the write queues and during stack traversal, has a lot of benefits:

- Less memory overhead, because write queues have fewer skbs.
- Less cpu overhead at ACK processing.
- Better SACK processing, as a lot of studies mentioned how awful linux was at this ;)
- Less cpu overhead to send the rtx packets (IP stack traversal, netfilter traversal, drivers...)
- Better latencies in presence of losses.
- Smaller spikes in fq-like packet schedulers, as retransmits are not constrained by TCP Small Queues.

1% packet losses are common today, and at 100Gbit speeds this translates to ~80,000 losses per second (assuming MTU-sized ~1500-byte packets: 100 Gbit/s is roughly 8.3M packets/s, and 1% of that is ~83,000/s). Losses are often correlated, and we see many retransmit events leading to 1-MSS trains of packets, at a time when hosts are already under stress. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-