- 28 Mar, 2017 5 commits
-
-
Mahesh Bandewar authored
bond_miimon_commit() marks the link UP after attempting to get the speed and duplex settings for the link. There is a possibility that bond_update_speed_duplex() could fail; this is another place where it could result in an inconsistent bonding link state. With this patch the link is marked UP and processed further only if the retrieved speed and duplex values are sane. Signed-off-by: Mahesh Bandewar <maheshb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mahesh Bandewar authored
bond_update_speed_duplex() retrieves speed and duplex settings. There is a possibility of failure in retrieving these values, but the caller has to assume it is always successful. This leads to inconsistent slave link settings. If these (speed, duplex) values cannot be retrieved, then keeping the link UP causes problems. The updated bond_update_speed_duplex() returns 0 on success if it retrieves sane values for speed and duplex. On failure it returns 1 and marks the link down. Signed-off-by: Mahesh Bandewar <maheshb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
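As a rough illustration (not the bonding driver's actual code), the sketch below shows a userspace analogue of this sanity check using the legacy ETHTOOL_GSET ioctl, treating an unknown speed or duplex as failure; the "eth0" interface name is an assumption.

/* Sketch only: userspace analogue of the speed/duplex sanity check.
 * The real fix lives in the bonding driver; "eth0" is just an example. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

static int link_settings_sane(const char *ifname)
{
	struct ethtool_cmd ecmd = { .cmd = ETHTOOL_GSET };
	struct ifreq ifr;
	int fd, ret = 1;		/* 1 = failure, matching the patch */
	__u32 speed;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0)
		return 1;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_data = (char *)&ecmd;

	if (ioctl(fd, SIOCETHTOOL, &ifr) == 0) {
		speed = ethtool_cmd_speed(&ecmd);
		/* Only accept sane values, mirroring the commit's intent. */
		if (speed != 0 && speed != (__u32)SPEED_UNKNOWN &&
		    (ecmd.duplex == DUPLEX_FULL || ecmd.duplex == DUPLEX_HALF))
			ret = 0;	/* 0 = success */
	}
	close(fd);
	return ret;
}

int main(void)
{
	printf("eth0 link settings sane: %s\n",
	       link_settings_sane("eth0") == 0 ? "yes" : "no");
	return 0;
}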
-
Mahesh Bandewar authored
The primary issue is that the mii-inspect phase updates the link state and expects those changes to be committed during the mii-commit phase. If, after the inspect phase, it fails to acquire the rtnl mutex, the commit phase (bond_mii_commit) doesn't get to run. This partially updated state lingers and makes the internal state inconsistent, e.g. setup bond0 => slaves: eth1, eth2

eth1 goes DOWN -> UP
  mii_monitor()
    mii-inspect()
      bond_set_slave_link_state(eth1, UP, DontNotify)
    rtnl_trylock() <- fails!

Next mii-monitor round, eth1: No change
  mii_monitor()
    mii-inspect()
      eth1->link == current-status (ethtool_ops->get_link)
      no-change-detected

End result:
  eth1: Link   = BOND_LINK_UP
        Speed  = 0xfffff [SpeedUnknown]
        Duplex = 0xff    [DuplexUnknown]

This doesn't always happen, but across a large set of machines some unlucky ones hit it and run into problems. The fix is to avoid making changes during the inspect phase and to postpone them until the rtnl mutex is acquired and the commit phase runs. Signed-off-by: Mahesh Bandewar <maheshb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mahesh Bandewar authored
Split the function into two phases, (a) propose and (b) commit, without changing the semantics of the original API. Signed-off-by: Mahesh Bandewar <maheshb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
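A self-contained model of the propose/commit split is sketched below: propose only records the desired state, and nothing visible changes until commit runs, which is what allows the commit to be deferred until the rtnl mutex is available. The names merely mimic the bonding helpers; this is not kernel code.

/* Standalone model of the propose/commit link-state split; struct and
 * function names only echo the bonding helpers, they are not kernel code. */
#include <stdio.h>
#include <stdbool.h>

enum link_state { LINK_DOWN, LINK_UP };

struct slave {
	const char *name;
	enum link_state link;		/* committed state */
	enum link_state link_new_state;	/* proposed state */
};

/* Inspect phase: only record what the state should become. */
static void propose_link_state(struct slave *s, enum link_state state)
{
	s->link_new_state = state;
}

/* Commit phase: apply the proposal (in the driver, only under rtnl). */
static void commit_link_state(struct slave *s, bool notify)
{
	if (s->link == s->link_new_state)
		return;
	s->link = s->link_new_state;
	if (notify)
		printf("%s: link is now %s\n", s->name,
		       s->link == LINK_UP ? "up" : "down");
}

int main(void)
{
	struct slave eth1 = { "eth1", LINK_DOWN, LINK_DOWN };

	propose_link_state(&eth1, LINK_UP);	/* inspect phase */
	/* ...rtnl_trylock() may fail here; committed state is untouched... */
	commit_link_state(&eth1, true);		/* commit phase, under rtnl */
	return 0;
}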
-
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
David S. Miller authored
Jeff Kirsher says: ==================== 40GbE Intel Wired LAN Driver Updates 2017-03-27 This series contains updates to i40e and i40evf only. Alex updates the driver code so that we can do bulk updates of the page reference count instead of incrementing it by one reference at a time. He also fixed an issue where we were not resetting skb back to NULL after freeing it, cleaned up i40e_process_skb_fields() to align with other Intel drivers, and removed the FCoE code, since FCoE is not supported by any of the Fortville/Fortpark hardware, so there is not much point in carrying the code around, especially if it is broken and untested. Harshitha fixes a bug in the driver where the calculation of the RSS size was not taking into account the number of traffic classes enabled. Robert fixes potential race conditions around VF reset: first by eliminating IOMMU DMAR faults caused by VF hardware, and second by blocking changes to a VF's settings while an OS-initiated VF reset is still in progress. Bimmy removes a delay that is no longer needed, since it was only required for preproduction hardware. Colin King fixes a null pointer dereference, where the VSI was being dereferenced before the VSI NULL check. Jake fixes an issue with the recent addition of the "client code" to the driver, where we attempt to use an uninitialized variable, by correctly initializing the params variable via i40e_client_get_params(). v2: dropped patch 5 of the original series from Carolyn since we need more documentation and a rationale for the added delay, so Carolyn is taking the time to update the patch before we re-submit it for kernel inclusion. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 27 Mar, 2017 20 commits
-
-
Jacob Keller authored
Fix a bug, probably introduced by a mis-merge, associated with commits d7ce6422 ("i40e: don't check params until after checking for client instance", 2017-02-09) and 3140aa9a78c9 ("i40e: KISS the client interface", 2017-03-14). The first commit tried to move the initialization of the params structure so that we didn't bother doing this if we didn't have a client interface. You can already see that it looks fishy because of the indentation. The second commit refactors a bunch of the interface, and incorrectly drops the params initialization. I believe what occurred is that internally the two patches were re-ordered, and the resulting merge conflicts were resolved incorrectly. Fix the use of an uninitialized variable by correctly initializing the params variable via i40e_client_get_params(). Reported-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Colin Ian King authored
The VSI is being dereferenced before the VSI null check; if the VSI is null we end up with a null pointer dereference. Fix this by performing the VSI dereference after the VSI null check. Also remove the need for using adapter by using vsi->back->cinst. Detected by CoverityScan, CID#1419696, CID#1419697 ("Dereference before null check") Fixes: ed0e894d ("i40evf: add client interface") Signed-off-by: Colin Ian King <colin.king@canonical.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
Since FCoE isn't supported by the i40e products there isn't much point in carrying around code that will always evaluate to false. This patch goes through and strips out the code in several spots so that we don't go around carrying variables and/or code that is always going to evaluate to false or 0. Change-ID: I39d1d779c66c638b75525839db2b6208fdc809d7 Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
Looking over the code for FCoE it looks like the Rx path has been broken at least since the last major Rx refactor almost a year ago. It seems like FCoE isn't supported for any of the Fortville/Fortpark hardware so there isn't much point in carrying the code around, especially if it is broken and untested. Change-ID: I892de8fa551cb129ce2361e738ff82ce55fa229e Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
This is a minor clean-up to make the i40e/i40evf process_skb_fields function look a little more like what we have in igb. The Rx checksum function called out a need for skb->protocol but I can't see where it actually needs it. I am assuming this is something that was likely refactored out some time ago as the Rx checksum code has gone through a few rewrites. Change-ID: I0b4668a34d90b61b66ded7c7c26e19a3e2d06251 Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Bimmy Pujari authored
Removed delays that are no longer needed; they were only required at the preproduction stage. Signed-off-by: Bimmy Pujari <bimmy.pujari@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Robert Konklewski authored
First, this patch eliminates IOMMU DMAR faults caused by VF hardware. This is done by enabling VF hardware only after VSI resources are freed. Otherwise, hardware could DMA into memory that is being (or has just been) freed. Then, the VF driver is activated only after VSI resources have been reallocated, because the VF driver can request resources immediately after it's activated, so they need to be ready at that point. The second race condition happens when the OS initiates a VF reset and then, before it's finished, modifies the VF's settings by changing its MAC, VLAN ID, bandwidth allocation, anti-spoof checking, etc. These functions needed to be blocked while the VF is undergoing reset. Otherwise, they could operate on data structures that had just been freed or were not yet fully initialized. Change-ID: I43ba5a7ae2c9a1cce3911611ffc4598ae33ae3ff Signed-off-by: Robert Konklewski <robertx.konklewski@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
We need to reset skb back to NULL when we have freed it in the Rx cleanup path. I found one spot where this wasn't occurring so this patch fixes it. Change-ID: Iaca68934200732cd4a63eb0bd83b539c95f8c4dd Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Harshitha Ramamurthy authored
There exists a bug in the driver where the calculation of the RSS size was not taking into account the number of traffic classes enabled. This patch factors in the traffic classes both in the initial configuration of the table as well as reconfiguration. Change-ID: I34dcd345ce52faf1d6b9614bea28d450cfd5f621 Signed-off-by: Harshitha Ramamurthy <harshitha.ramamurthy@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
Update the driver code so that we do bulk updates of the page reference count instead of just incrementing it by one reference at a time. The advantage to doing this is that we cut down on atomic operations and this in turn should give us a slight improvement in cycles per packet. In addition if we eventually move this over to using build_skb the gains will be more noticeable. I also found and fixed a store forwarding stall from where we were assigning "*new_buff = *old_buff". By breaking it up into individual copies we can avoid this and as a result the performance is slightly improved. Change-ID: I1d3880dece4133eca3c32423b04a5467321ccc52 Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
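A minimal sketch of the store-forwarding-stall fix mentioned above follows: the whole-struct assignment "*new_buff = *old_buff" is replaced with individual member copies. The structure layout here is a placeholder for illustration, not the real i40e_rx_buffer.

/* Illustration only: per-field copies instead of a whole-struct
 * assignment, as the commit describes; this is a toy layout, not the
 * actual i40e receive buffer structure. */
struct rx_buffer {
	unsigned long long dma;
	void *page;
	unsigned int page_offset;
	unsigned short pagecnt_bias;
};

void reuse_rx_buffer(struct rx_buffer *new_buff,
		     const struct rx_buffer *old_buff)
{
	/* Individual member copies rather than *new_buff = *old_buff. */
	new_buff->dma          = old_buff->dma;
	new_buff->page         = old_buff->page;
	new_buff->page_offset  = old_buff->page_offset;
	new_buff->pagecnt_bias = old_buff->pagecnt_bias;
}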
-
David Lebrun authored
When CONFIG_IPV6_SEG6_LWTUNNEL is selected, automatically select DST_CACHE. This allows removing multiple ifdefs. Signed-off-by: David Lebrun <david.lebrun@uclouvain.be> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tobias Klauser authored
The ibmvnic driver keeps its statistics in net_device->stats, so the net_stats member in struct ibmvnic_adapter is unused. Remove it. Signed-off-by: Tobias Klauser <tklauser@distanz.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tobias Klauser authored
The ibmveth driver keeps its statistics in net_device->stats, so the stats member in struct ibmveth_adapter is unused. Remove it. Signed-off-by: Tobias Klauser <tklauser@distanz.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tobias Klauser authored
The bfin_mac driver keeps its statistics in net_device->stats, so the stats member in struct bfin_mac_local is unused. Remove it, as well as the accompanying comment. Cc: adi-buildroot-devel@lists.sourceforge.net Signed-off-by: Tobias Klauser <tklauser@distanz.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Colin Ian King authored
ndev is checked to see if it is a null pointer, but before that null check ndev is already being dereferenced; hence there is a potential null pointer dereference bug that needs fixing. Fix this by only dereferencing ndev after the null check. Detected by CoverityScan, CID#1420760, CID#140761 ("Dereference before null check") Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
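The toy before/after shapes below illustrate the "dereference before null check" defect class fixed here; the types are placeholders, not the hv_netvsc structures.

/* Toy illustration of the defect and its fix; placeholder types only. */
struct net_device_ctx { int weight; };
struct net_device { struct net_device_ctx *ctx; };

/* Buggy shape: ndev->ctx is read before ndev is checked. */
int weight_buggy(struct net_device *ndev)
{
	struct net_device_ctx *c = ndev->ctx;	/* may crash if ndev == NULL */

	if (!ndev)				/* check comes too late */
		return -1;
	return c->weight;
}

/* Fixed shape: check first, dereference only afterwards. */
int weight_fixed(struct net_device *ndev)
{
	struct net_device_ctx *c;

	if (!ndev)
		return -1;
	c = ndev->ctx;
	return c->weight;
}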
-
Philippe Reynes authored
The ethtool api {get|set}_settings is deprecated. We move this driver to the new api {get|set}_link_ksettings. As I don't have the hardware, I'd be very pleased if someone could test this patch. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Philippe Reynes authored
The ethtool api {get|set}_settings is deprecated. We move this driver to the new api {get|set}_link_ksettings. As I don't have the hardware, I'd be very pleased if someone could test this patch. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
David Ahern says: ==================== net: mpls: multipath route cleanups When a device associated with a nexthop is deleted, the nexthop in the route is effectively removed, so remove it from the route dump. Further, when all nexthops have been deleted the route is effectively done, so remove the route. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
David Ahern authored
When all devices for all nexthops in a route have been deleted, the route is effectively dead, so remove it. Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Acked-by: Roopa Prabhu <roopa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David Ahern authored
If the device for a nexthop in a multipath route is deleted, the nexthop is effectively removed from the route. Currently, a route dump still returns the nexthop, though without the device set:

$ ip -f mpls ro ls
100
    nexthop via inet 10.11.1.2 dev br0
    nexthop via inet 10.100.3.1 dev eth3
$ ip li del br0
$ ip -f mpls ro ls
100
    nexthop via inet 10.11.1.2 dev * dead linkdown
    nexthop via inet 10.100.3.1 dev eth3

Since the nexthop is effectively deleted, drop the hop from the route dump. Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Acked-by: Roopa Prabhu <roopa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 26 Mar, 2017 10 commits
-
-
David S. Miller authored
K. Y. Srinivasan says: ==================== netvsc: Fix miscellaneous issues Fix miscellaneous issues. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
K. Y. Srinivasan authored
Initialize the return value correctly. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
K. Y. Srinivasan authored
All netvsc channels are handled via NAPI. Setup the "read mode" correctly for the netvsc sub-channels. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Jonas Bonn says: ==================== GTP SGSN-side tunnel Changes since v4: * Respin the series on top of net-next; the conflicts were trivial, amounting to just code having been shifted about ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jonas Bonn authored
The GTP-tunnel driver is explicitly GGSN-side as it searches for PDP contexts based on the incoming packets' _destination_ address. If we want to place ourselves on the SGSN side of the tunnel, then we want to be identifying PDP contexts based on the _source_ address. Let it be noted that in a "real" configuration this module would never be used: the SGSN normally does not see IP packets as input. The justification for this functionality is for PGW load-testing applications where the input to the SGSN is locally generated IP traffic. This patch adds a "role" argument at GTP-link creation time to specify whether we are on the GGSN or SGSN side of the tunnel; this flag is then used to determine which part of the IP packet to use in determining the PDP context. Signed-off-by: Jonas Bonn <jonas@southpole.se> Acked-by: Pablo Neira Ayuso <pablo@netfilter.org> Acked-by: Harald Welte <laforge@gnumonks.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jonas Bonn authored
This is a mostly cosmetic rename of the SGSN netlink attribute to the GTP link. The justification for this is that we will be making the module support decapsulation of "downstream" SGSN packets, in which case the netlink parameter actually refers to the upstream GGSN peer. Renaming the parameter makes the relationship clearer. The legacy name is maintained as a define in the header file in order to not break existing code. Signed-off-by: Jonas Bonn <jonas@southpole.se> Acked-by: Pablo Neira Ayuso <pablo@netfilter.org> Acked-by: Harald Welte <laforge@gnumonks.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Daniele Palmas says: ==================== net: usb: qmi_wwan: add qmap mux protocol support This patch adds support for the qmap mux protocol available in recent Qualcomm based modems. The qmap mux protocol can be used for multiplexing data packets in order to have multiple ip streams through the same physical device. Two new sysfs files are added for adding/removing the qmap mux based interfaces (named qmimux): /sys/class/net/<iface>/qmi/add_mux /sys/class/net/<iface>/qmi/del_mux Main patch author is Bjørn Mork <bjorn@mork.no> A userspace implementation of the qmi requests needed to support multiple ip streams is already available (namely libqmi since version 1.18.0). The qmap mux feature has recently been implemented in the Codeaurora gobinet out-of-kernel driver, which was the inspiration for this development. Tests have been performed with Telit LE922A6 (PID 0x1040) ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniele Palmas authored
This patch updates the documentation related to the new files added for qmap mux support. Signed-off-by: Daniele Palmas <dnlplm@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniele Palmas authored
This patch adds support for qmap mux protocol available in recent Qualcomm based modems. The qmap mux protocol can be used for multiplexing data packets in order to have multiple ip streams through the same physical device. Two new sysfs files are added for adding/removing the qmap mux based interfaces (named qmimux): - /sys/class/net/<iface>/qmi/add_mux - /sys/class/net/<iface>/qmi/del_mux Main patch author is Bjørn Mork <bjorn@mork.no> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: Daniele Palmas <dnlplm@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
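A small sketch of driving the new sysfs knob from C follows; the "wwan0" interface name and mux id 0x01 are assumed example values, and the matching QMI configuration still has to be done in userspace (e.g. with libqmi).

/* Sketch of writing a mux id to the new add_mux sysfs attribute; the
 * interface name and mux id below are examples, not required values. */
#include <stdio.h>

static int add_qmap_mux(const char *iface, unsigned int mux_id)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/class/net/%s/qmi/add_mux", iface);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "0x%02x\n", mux_id);	/* driver creates a qmimuxN netdev */
	return fclose(f);
}

int main(void)
{
	return add_qmap_mux("wwan0", 0x01) ? 1 : 0;
}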
-
Arkadi Sharshevsky authored
Currently the returned allocated index and the error value are multiplexed. This patch changes the API to decouple the return value from the allocated index. Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com> Reviewed-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
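A generic sketch of the pattern described above: return only an error code and hand the allocated index back through an output pointer. The names are illustrative, not the actual mlxsw functions.

/* Generic sketch of decoupling the error code from the allocated index;
 * illustrative names only. */
#include <errno.h>

#define POOL_SIZE 64

static unsigned char used[POOL_SIZE];

/* Before: a negative error or a non-negative index shared the return
 * value.  After: the return value is only an error code, and the index
 * comes back through *p_index. */
int pool_index_alloc(unsigned int *p_index)
{
	unsigned int i;

	for (i = 0; i < POOL_SIZE; i++) {
		if (!used[i]) {
			used[i] = 1;
			*p_index = i;
			return 0;
		}
	}
	return -ENOBUFS;
}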
-
- 25 Mar, 2017 5 commits
-
-
David S. Miller authored
Alexander Duyck says: ==================== Add busy poll support for epoll This patch set adds support for using busy polling with epoll. The main idea behind this is that we record the NAPI ID for the last event that is moved onto the ready list for the epoll context and then when we no longer have any events on the ready list we begin polling with that ID. If the busy polling does not yield any events then we will reset the NAPI ID to 0 and wait until a new event is added to the ready list with a valid NAPI ID before we will resume busy polling. Most of the changes in this set authored by me are meant to be cleanup or fixes for various things. For example, I am trying to make it so that we don't perform hash look-ups for the NAPI instance when we are only working with sender_cpu and the like. At the heart of this set is the last 3 patches which enable epoll support and add support for obtaining the NAPI ID of a given socket. With these it becomes possible for an application to make use of epoll and get optimal busy poll utilization by stacking multiple sockets with the same NAPI ID on the same epoll context. v1: The first version of this series only allowed epoll to busy poll if all of the sockets with a NAPI ID shared the same NAPI ID. I feel we were too strict with this requirement, so I changed the behavior for v2. v2: The second version was pretty much a full rewrite of the first set. The main changes consisted of pulling apart several patches to better address the need to clean up a few items and to make the code easier to review. In the set however I went a bit overboard and was trying to fix an issue that would only occur with 500+ years of uptime, and in the process limited the range for busy_poll/busy_read unnecessarily. v3: Split off the code for limiting busy_poll and busy_read into a separate patch for net. Updated patch that changed busy loop time tracking so that it uses "local_clock() >> 10" as we originally did. Tweaked "Change return type.." patch by moving declaration of "work" inside the loop where it was accessed and always reset to 0. Added "Acked-by" for patches that received acks. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sridhar Samudrala authored
This socket option returns the NAPI ID associated with the queue on which the last frame is received. This information can be used by the apps to split the incoming flows among the threads based on the Rx queue on which they are received. If the NAPI ID actually represents a sender_cpu then the value is ignored and 0 is returned. Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
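Minimal userspace usage of the new option is sketched below; SO_INCOMING_NAPI_ID is introduced by this series (value 56 on most architectures), so older libc headers may not define it yet.

/* Query the NAPI ID of the queue that delivered the last frame on fd. */
#include <sys/socket.h>

#ifndef SO_INCOMING_NAPI_ID
#define SO_INCOMING_NAPI_ID 56	/* asm-generic value; arch-specific on some platforms */
#endif

unsigned int socket_napi_id(int fd)
{
	unsigned int napi_id = 0;
	socklen_t len = sizeof(napi_id);

	/* Returns 0 if the stored ID actually represents a sender_cpu. */
	if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID, &napi_id, &len) < 0)
		return 0;
	return napi_id;
}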
-
Sridhar Samudrala authored
This patch adds busy poll support to epoll. The implementation is meant to be opportunistic in that it will take the NAPI ID from the last socket that is added to the ready list that contains a valid NAPI ID and it will use that for busy polling until the ready list goes empty. Once the ready list goes empty the NAPI ID is reset and busy polling is disabled until a new socket is added to the ready list. In addition when we insert a new socket into the epoll we record the NAPI ID and assume we are going to receive events on it. If that doesn't occur it will be evicted as the active NAPI ID and we will resume normal behavior. An application can use SO_INCOMING_CPU or SO_REUSEPORT_ATTACH_C/EBPF socket options to spread the incoming connections to specific worker threads based on the incoming queue. This enables epoll for each worker thread to have only sockets that receive packets from a single queue. So when an application calls epoll_wait() and there are no events available to report, busy polling is done on the associated queue to pull the packets. Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
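The sketch below shows the intended usage pattern from the application side: group accepted sockets onto per-worker epoll instances keyed by their NAPI ID, so each epoll context only holds sockets fed by one receive queue. The worker-selection policy and worker count are assumptions for illustration, not part of the kernel interface.

/* Hand a freshly accepted socket to the worker whose epoll serves its
 * NAPI ID; illustrative policy only. */
#include <sys/epoll.h>
#include <sys/socket.h>

#ifndef SO_INCOMING_NAPI_ID
#define SO_INCOMING_NAPI_ID 56
#endif

#define NUM_WORKERS 4

int assign_to_worker(int conn_fd, const int worker_epfd[NUM_WORKERS])
{
	unsigned int napi_id = 0;
	socklen_t len = sizeof(napi_id);
	struct epoll_event ev = { .events = EPOLLIN, .data.fd = conn_fd };
	int epfd;

	if (getsockopt(conn_fd, SOL_SOCKET, SO_INCOMING_NAPI_ID,
		       &napi_id, &len) < 0)
		napi_id = 0;

	/* Sockets sharing a NAPI ID land on the same epoll context. */
	epfd = worker_epfd[napi_id % NUM_WORKERS];
	return epoll_ctl(epfd, EPOLL_CTL_ADD, conn_fd, &ev);
}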
-
Sridhar Samudrala authored
Move the core functionality in sk_busy_loop() to napi_busy_loop() and make it independent of sk. This enables re-using this function in epoll busy loop implementation. Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
This patch flips the logic we were using to determine if the busy polling has timed out. The main motivation for this is that we will need to support two different possible timeout values in the future, and by recording the start time rather than the intended end time we can focus on making the end_time specific to the task, be it epoll or socket based polling. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-