- 06 Apr, 2017 9 commits
-
-
Ngai-Mint Kwan authored
Interfaces will reset whenever the TX mailbox FIFO has become full. This occurs more frequently whenever the IES API application is not running to process and clear the messages in the FIFO. Thus, this could lead to situations where the interface would enter an infinite reset loop. That is: if the interface is trying to synchronize a huge number of unicast and multicast entries with the IES API application, the TX mailbox FIFO will become full and the interface resets. Once the interface exits reset, it'll try to synchronize the unicast and multicast entries again. Ergo, this creates an infinite loop. Other actions such as multiple multicast mode or up/down transitions will fill the TX mailbox FIFO and induce the interface to reset. To correct these situations, check if the interface's "host_ready" flag is enabled before enqueuing any messages to the TX mailbox FIFO. This check is performed via a function call. Lastly, this issue mainly affects the PF and, thus, the VF is exempt. Signed-off-by: Ngai-Mint Kwan <ngai-mint.kwan@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
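A minimal sketch of the gating idea, assuming hypothetical helper names (fm10k_example_host_is_ready(), fm10k_example_mbx_enqueue()) rather than the driver's actual functions:

    /* Hedged sketch: skip mailbox enqueues while the switch host is not
     * ready, so a full TX mailbox FIFO cannot keep forcing resets.
     * All helper names here are illustrative, not the fm10k code.
     */
    static int fm10k_example_queue_mac_request(struct fm10k_intfc *interface,
                                               const u8 *addr, u16 vid, bool set)
    {
            if (!fm10k_example_host_is_ready(interface))
                    return -EHOSTDOWN;      /* retried once the host comes up */

            return fm10k_example_mbx_enqueue(interface, addr, vid, set);
    }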
-
Ngai-Mint Kwan authored
Write to RXQCTL register to disable the receive queue when configuring the RX ring. Signed-off-by: Ngai-Mint Kwan <ngai-mint.kwan@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Jacob Keller authored
Re-word the comment to avoid stating that we return a value for this void function. Additionally, there is no need to mention older kernels, since this is the upstream kernel. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Jacob Keller authored
If some code path executes fm10k_service_event_schedule(), it is guaranteed that we only queue the service task once, since we use the __FM10K_SERVICE_SCHED flag. Unfortunately this has a side effect: if a service request occurs while we are currently running the watchdog, it is possible that we will fail to notice the request and ignore it until the next time the request occurs. This can cause problems with PF/VF mailbox communication and other service event tasks. To avoid this, introduce a FM10K_SERVICE_REQUEST bit. When we successfully schedule (and set the _SCHED bit) the service task, we will clear this bit. However, if we are unable to currently schedule the service event, we just set the new SERVICE_REQUEST bit. Finally, after the service event completes, we will re-schedule if the request bit has been set. This should ensure that we do not miss any service event schedules, since we will re-schedule it once the currently running task finishes. This means that for each request, we will always schedule the service task to run at least once in full after the request came in. This will avoid timing issues that can occur with the service event scheduling. We do pay a cost in re-running many tasks, but all the service event tasks use either flags to avoid duplicate work, or are tolerant of being run multiple times. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
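A hedged sketch of the schedule/complete pattern described above, with made-up names; it mirrors the idea, not the driver's exact code:

    enum {
            __EXAMPLE_SERVICE_SCHED,
            __EXAMPLE_SERVICE_REQUEST,
            __EXAMPLE_STATE_NBITS
    };

    struct example_intfc {
            DECLARE_BITMAP(state, __EXAMPLE_STATE_NBITS);
            struct work_struct service_task;
    };

    static void example_service_event_schedule(struct example_intfc *intf)
    {
            if (!test_and_set_bit(__EXAMPLE_SERVICE_SCHED, intf->state))
                    schedule_work(&intf->service_task);
            else
                    /* note the request so the task re-runs when done */
                    set_bit(__EXAMPLE_SERVICE_REQUEST, intf->state);
    }

    static void example_service_event_complete(struct example_intfc *intf)
    {
            clear_bit(__EXAMPLE_SERVICE_SCHED, intf->state);

            /* re-schedule once if a request arrived while we were running */
            if (test_and_clear_bit(__EXAMPLE_SERVICE_REQUEST, intf->state))
                    example_service_event_schedule(intf);
    }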
-
Jacob Keller authored
This ensures that future programmers do not have to remember to re-size the bitmaps due to adding new values. Although this is unlikely for this driver, it may happen and it's best to prevent it from ever being an issue. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
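A small sketch of the convention being described, with made-up flag names: keeping a terminating enum entry lets DECLARE_BITMAP() size the array automatically as flags are added.

    /* Hedged illustration, not the driver's actual enum */
    enum fm10k_example_flags {
            FM10K_EXAMPLE_FLAG_A,
            FM10K_EXAMPLE_FLAG_B,
            /* must stay last: it doubles as the bit count */
            FM10K_EXAMPLE_FLAGS_NBITS
    };

    struct fm10k_example_intfc {
            DECLARE_BITMAP(flags, FM10K_EXAMPLE_FLAGS_NBITS);
    };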
-
Jacob Keller authored
Replace bitwise operators and #defines with a BITMAP and enumeration values. This is similar to how we handle the "state" values as well. This has two distinct advantages over the old method. First, we ensure correctness of operations which are currently problematic due to race conditions. Suppose that two kernel threads are running, such as the watchdog and an ethtool ioctl, and both modify flags. We'll say that the watchdog is CPU A, and the ethtool ioctl is CPU B.

CPU A sets FLAG_1, which can be seen as
  CPU A read FLAGS
  CPU A write FLAGS | FLAG_1

CPU B sets FLAG_2, which can be seen as
  CPU B read FLAGS
  CPU B write FLAGS | FLAG_2

However, "|=" and "&=" operators are not actually atomic. So this could be ordered like the following:

  CPU A read FLAGS -> variable
  CPU B read FLAGS -> variable
  CPU A write FLAGS (variable | FLAG_1)
  CPU B write FLAGS (variable | FLAG_2)

Notice how the 2nd write from CPU B could actually undo the write from CPU A because it isn't guaranteed that the |= operation is atomic. In practice the race windows for most flag writes are incredibly narrow, so it is not easy to isolate issues. However, the more flags we have, the more likely they will cause problems. Additionally, if such a problem were to arise, it would be incredibly difficult to track down. Second, there is an additional advantage beyond code correctness. We can now automatically size the BITMAP if more flags are added, so that we do not need to remember that flags is u32 and thus that adding too many flags would over-run the variable. This is not a likely occurrence for the fm10k driver, but this patch can serve as an example for other drivers which have many more flags. This particular change does have a bit of trouble converting some of the idioms previously used with the #defines for flags, specifically when converting the FM10K_FLAG_RSS_FIELD_IPV[46]_UDP flags. That conversion was quite problematic because those flags were stored separately, which more easily exposes the re-ordering issue described above. It is difficult to test whether the atomics make a difference in practical scenarios, but basic functionality can be verified to remain the same. This patch has a lot of code coverage, but most of it is relatively simple. While we are modifying these files, update their copyright year. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
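A hedged sketch of the atomicity argument, reusing the illustrative bitmap layout from the sketch above; set_bit()/test_and_clear_bit() are atomic read-modify-write operations, unlike plain "|=" on a u32, so concurrent updates of different flags cannot lose each other:

    /* Illustrative only; these are not the driver's real helpers */
    static void example_request_reset(struct fm10k_example_intfc *intf)
    {
            /* was: intf->flags |= FM10K_EXAMPLE_FLAG_A;  (non-atomic) */
            set_bit(FM10K_EXAMPLE_FLAG_A, intf->flags);
    }

    static void example_watchdog(struct fm10k_example_intfc *intf)
    {
            /* atomically test and consume the request */
            if (test_and_clear_bit(FM10K_EXAMPLE_FLAG_A, intf->flags))
                    ;       /* handle the flagged work */
    }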
-
Phil Turnbull authored
FM10K_REMOVED expects a hardware address, not a 'struct fm10k_hw'. Fixes: 5cb8db4a ("fm10k: Add support for VF") Signed-off-by: Phil Turnbull <phil.turnbull@oracle.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Jarod Wilson authored
People are using bonding over Infiniband IPoIB connections, and who knows what else. Infiniband has a hardware address length of 20 octets (INFINIBAND_ALEN), and the network core defines a MAX_ADDR_LEN of 32. Various places in the bonding code are currently hard-wired to 6 octets (ETH_ALEN), such as the 3ad code, which I've left untouched here. Besides, only alb is currently possible on Infiniband links right now anyway, due to commit 1533e773, so the alb code is where most of the changes are. One major component of this change is the addition of a bond_hw_addr_copy function that takes a length argument, instead of using ether_addr_copy everywhere that hardware addresses need to be copied about. The other major component of this change is converting the bonding code from using struct sockaddr for address storage to struct sockaddr_storage, as the former has an address storage space of only 14 octets, while the latter is 128 minus a few, which is necessary to support bonding over devices with up to MAX_ADDR_LEN octet hardware addresses. Additionally, this probably fixes up some memory corruption issues with the current code, where it's possible to write an infiniband hardware address into a sockaddr declared on the stack. Lightly tested on a dual mlx4 IPoIB setup, which properly shows a 20-octet hardware address now:

$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup) (fail_over_mac active)
Primary Slave: mlx4_ib0 (primary_reselect always)
Currently Active Slave: mlx4_ib0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 100
Down Delay (ms): 100

Slave Interface: mlx4_ib0
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 80:00:02:08:fe:80:00:00:00:00:00:00:e4:1d:2d:03:00:1d:67:01
Slave queue ID: 0

Slave Interface: mlx4_ib1
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 80:00:02:09:fe:80:00:00:00:00:00:01:e4:1d:2d:03:00:1d:67:02
Slave queue ID: 0

Also tested with a standard 1Gbps NIC bonding setup (with a mix of e1000 and e1000e cards), running LNST's bonding tests. CC: Jay Vosburgh <j.vosburgh@gmail.com> CC: Veaceslav Falico <vfalico@gmail.com> CC: Andy Gospodarek <andy@greyhouse.net> CC: netdev@vger.kernel.org Signed-off-by: Jarod Wilson <jarod@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
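A minimal sketch of the two components, assuming generic helper names rather than the exact code the patch adds: the copy is parameterized by the device's addr_len, and the destination lives in a struct sockaddr_storage, which (unlike struct sockaddr's 14-byte sa_data) can hold a 20-octet InfiniBand address.

    /* Hedged illustration of the idea, not the bonding driver's code */
    static void example_hw_addr_copy(u8 *dst, const u8 *src, unsigned int len)
    {
            /* len comes from dev->addr_len (up to MAX_ADDR_LEN), instead of
             * ether_addr_copy()'s hard-wired ETH_ALEN
             */
            memcpy(dst, src, len);
    }

    static void example_save_perm_addr(struct sockaddr_storage *ss,
                                       const struct net_device *dev)
    {
            /* struct sockaddr_storage is large enough for a 20-octet
             * InfiniBand address; struct sockaddr's sa_data (14 bytes) is not
             */
            example_hw_addr_copy((u8 *)ss, dev->dev_addr, dev->addr_len);
    }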
-
Edward Cree authored
If the mc_list is longer than 256 addresses, we enter mc_promisc mode. If we're in mc_promisc mode and the firmware doesn't support cascaded multicast, normally we also insert our mc_list, to prevent stealing by another VI. However, if the mc_list was too long, this isn't really helpful - the MC groups that didn't fit in the list can still get stolen, and having only some of them stealable will probably cause more confusing behaviour than having them all stealable. Since inserting 256 multicast filters takes a long time and can lead to MCDI state machine timeouts, just skip the mc_list insert in this overflow condition. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 05 Apr, 2017 31 commits
-
-
David S. Miller authored
Jakub Kicinski says: ==================== nfp: ethtool link settings This series adds support for getting and setting link settings via the (moderately) new ethtool ksettings ops. The first patch introduces minimal speed and duplex reporting using the information directly provided in PCI BAR0 memory. The next few changes deal with the need to refresh port state read from the service process, and patch 6 finally uses that information to provide link speed and duplex. Patches 7 and 8 add auto negotiation and port type reporting. The remaining changes provide the set support for speed and auto negotiation. An upcoming series will also add port splitting support via devlink. Quite a bit of churn in this series is caused by the fact that currently port speed and split changes will usually require a reboot to take effect. The current service process code is not capable of performing MAC reinitialization after the chip has been passing traffic. To make sure the user is aware of this limitation, we refuse the configuration unless the netdev is down, print a warning to the logs, and if the configuration was performed but did not take effect, we unregister the netdev. The service process has a "reboot needed" sticky bit, so reloading the driver will not bring the netdev back. Note that there is a helper in patch 13 which is marked as __always_inline, because the FIELD_* macros require the parameters to be known at compilation time. I hope that is OK. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Support setting the link speed and autonegotiation through the set_link_ksettings() ethtool op. If the port is reconfigured in an incompatible way and a reboot is required, the netdev will get unregistered and will not come back until the user reboots the system. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Add NSP backend for upcoming link configuration operations. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Allow NSP to set the option code even when an error is reported. This provides a way for the NSP to give the user more precise information about why a command failed. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Make NSP port structure a union to simplify accessing the fields from generic macros. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
NSP commands may be slow to respond; we should try to avoid doing a command per item when the user has requested changing multiple parameters, for instance with an ethtool .set_settings() command. Introduce a way for internal NSP code to carry state in the NSP structure, and add start/finish calls to perform the initialization and kick-off of the configuration request, with potentially many parameters being modified in between. nfp_eth_set_mod_enable() will make use of the new code internally; other "set" functions will follow. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
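A hedged sketch of the batched flow described above, with illustrative function names (the real API may differ): open a configuration handle, stage several changes, then issue a single NSP request.

    /* Illustrative only; not the exact nfp_nsp interface */
    static int example_set_port_speed(struct nfp_cpp *cpp, unsigned int idx,
                                      unsigned int speed_mbps)
    {
            struct nfp_nsp *nsp;

            nsp = example_eth_config_start(cpp, idx);
            if (IS_ERR(nsp))
                    return PTR_ERR(nsp);

            /* stage as many field updates as needed without issuing commands */
            example_eth_set_speed(nsp, speed_mbps);
            example_eth_set_aneg(nsp, false);

            /* a single NSP command applies everything staged above */
            return example_eth_config_commit_end(nsp);
    }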
-
Jakub Kicinski authored
We will soon add more NSP commands and structure definitions. Move all high-level NSP header contents to a common nfp_nsp.h file. Right now it mostly boils down to renaming nfp_nsp_eth.h and moving some functions from nfp.h there. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Service process firmware provides us with information about the media and interface (SFP module) plugged in; translate that to Linux's PORT_* defines and report it via ethtool. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
NSP ABI version 0.17 is exposing the autonegotiation settings. Report whether autoneg is on via ethtool. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
On the PF prefer the link speed value provided by the NSP. Refresh port table if needed. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
We will need a way of refreshing port state for link settings get/set. For get we need to refresh port speed and type. When settings are changed, the reconfiguration may require a reboot before it is effective. Unregister netdevs affected by reconfiguration from a workqueue. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
For caching link settings, remember whether we have seen link events since the last time the eth_port information was refreshed. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
We will want to unregister netdevs after their port got reconfigured. For that we need to make sure manipulations of port list from the port reconfiguration flow will not race with driver's .remove() callback. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
After port reconfiguration (port split, media type change) firmware will continue to report old configuration until reboot. NSP will inform us that reconfiguration is pending. To avoid user confusion refuse to spawn netdevs until the new configuration is applied (reboot). We need to split the netdev to eth_table port matching from MAC search and move it earlier in the probe() flow. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Read link speed from the BAR. This provides very basic information and works for both PFs and VFs. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Merge tag 'linux-can-next-for-4.13-20170404' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next Marc Kleine-Budde says: ==================== pull-request: can-next 2017-03-03 this is a pull request of 5 patches for net-next/master. There are two patches by Yegor Yefremov which convert the ti_hecc driver into a DT only driver, as there is no in-tree user of the old platform driver interface anymore. The next patch by Mario Kicherer adds network namespace support to the can subsystem. The last two patches by Akshay Bhat add support for the holt_hi311x SPI CAN driver. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
LABBE Corentin authored
This patch adds a generic testsuite for testing Ethernet network device drivers. Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com> Tested-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Vladislav Yasevich says: ==================== rtnetlink: Updates to rtnetlink_event() This series came out of the conversation that started as a result of my first attempt to add netdevice event info to netlink messages. This series converts event processing to a 'white list', where we explicitly permit events to generate netlink messages. This is meant to make people take a closer look and determine whether these events should really trigger netlink messages. I am also adding a V2 of my patch to add event type to the netlink message. This version supports all events that we currently generate. I will also update my patch to iproute that will show this data through 'ip monitor'. I actually need the ability to trap the NETDEV_NOTIFY_PEERS event (as well as possibly NETDEV_RESEND_IGMP) to support handling of macvtap on top of bonding. I hope others will also find this info useful. V2: Added missed events (from David Ahern) ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vlad Yasevich authored
When netdev events happen, a rtnetlink_event() handler will send messages for every event in its white list. These messages contain current information about a particular device, but they do not include the information about which event just happened. The consumer of the message has to try to infer this information. In some cases (ex: NETDEV_NOTIFY_PEERS), that is not possible. This patch adds a new extension to the RTM_NEWLINK message called IFLA_EVENT that would have an encoding of which event triggered this message. This would allow the message consumer to easily determine if it is interested in a particular event or not. Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
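A hedged userspace sketch of how a consumer might read such an attribute from an RTM_NEWLINK message (attribute parsing only, socket setup omitted; it assumes kernel headers that define IFLA_EVENT, and the value encoding is whatever the final patch defines):

    #include <linux/rtnetlink.h>
    #include <linux/if_link.h>
    #include <stdint.h>
    #include <stdio.h>

    static void example_handle_newlink(struct nlmsghdr *nlh)
    {
            struct ifinfomsg *ifi = NLMSG_DATA(nlh);
            struct rtattr *rta = IFLA_RTA(ifi);
            int len = IFLA_PAYLOAD(nlh);

            for (; RTA_OK(rta, len); rta = RTA_NEXT(rta, len)) {
                    if (rta->rta_type == IFLA_EVENT)
                            printf("ifindex %d: event code %u\n",
                                   ifi->ifi_index,
                                   *(uint32_t *)RTA_DATA(rta));
            }
    }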
-
Vlad Yasevich authored
The rtnetlink_event currently functions as a blacklist where we block certain netdev events from being sent to user space. As a result, events have been added to the system that userspace probably doesn't care about. This patch converts the implementation to a white list so that new events have to be specifically added to the list to be sent to userspace. This would force new event implementers to consider whether a given event is useful to user space or if it's just a kernel event. Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
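A hedged sketch of the white-list shape being described (not the exact upstream function; the events listed are illustrative): only events named in the switch generate an RTM_NEWLINK message, and everything else falls through silently.

    static void example_rtnetlink_event(unsigned long event,
                                        struct net_device *dev)
    {
            switch (event) {
            case NETDEV_CHANGENAME:
            case NETDEV_FEAT_CHANGE:
            case NETDEV_NOTIFY_PEERS:
            case NETDEV_RESEND_IGMP:
                    rtmsg_ifinfo(RTM_NEWLINK, dev, 0, GFP_KERNEL);
                    break;
            default:
                    /* new events must be added here explicitly to be
                     * reported to userspace
                     */
                    break;
            }
    }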
-
Gao Feng authored
Define one new macro, TCP_MAX_WSCALE, instead of the literal number '14', and use U16_MAX instead of 65535 as the max value of the TCP window. There is another minor change: use rounddown(space, mss) instead of (space / mss) * mss. Signed-off-by: Gao Feng <fgao@ikuai8.com> Signed-off-by: David S. Miller <davem@davemloft.net>
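A hedged sketch of the substitutions, not the actual tcp_select_initial_window() code; the loop shape is simplified and exists only to show where the named constants and rounddown() fit:

    #define TCP_MAX_WSCALE_EXAMPLE  14U     /* window scale is capped at 14 */

    static u32 example_initial_window(u32 space, u32 mss, u8 *wscale)
    {
            u8 ws = 0;

            /* rounddown() replaces the open-coded (space / mss) * mss */
            space = rounddown(space, mss);

            /* U16_MAX replaces the literal 65535; the scale factor grows
             * only up to TCP_MAX_WSCALE rather than a bare 14
             */
            while (space > U16_MAX && ws < TCP_MAX_WSCALE_EXAMPLE) {
                    space >>= 1;
                    ws++;
            }

            *wscale = ws;
            return space;
    }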
-
Eric Biggers authored
Since commit d6580a9f ("kexec: sysrq: simplify sysrq-c handler"), the sysrq handler for the 'c' key has been sysrq_crash_op. Debugging code in the ibm_emac driver also tries to register a handler for the 'c' key, but this has no effect because register_sysrq_key() doesn't replace existing handlers. Since evidently no one has cared enough to fix this in the last 8 years, and it's very rare for drivers to register sysrq handlers (for good reason), just remove the dead code. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mahesh Bandewar authored
Earlier patch c4adfc82 ("bonding: make speed, duplex setting consistent with link state") made an attempt to keep slave state consistent with speed and duplex settings. Unfortunately, link-state transition is used to change the active link, especially when used in conjunction with mii-mon. The above mentioned patch broke that logic. Also, when speed and duplex settings for a link are updated during a link event, the link-status should not be changed, so that the correct transition logic is invoked. This patch fixes this issue by moving the link-state update out of the bond_update_speed_duplex() fn and into the places where this fn is called, updating link-state selectively. Fixes: c4adfc82 ("bonding: make speed, duplex setting consistent with link state") Signed-off-by: Mahesh Bandewar <maheshb@google.com> Reviewed-by: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrey Vagin authored
cb_running is reported in /proc/self/net/netlink and it is reported by the ss tool, when it gets information from the proc files. sock_diag is a new interface which is used instead of proc files, so it looks reasonable that this interface has to report no less information about sockets than proc files. We use these flags to dump and restore netlink sockets. Signed-off-by: Andrei Vagin <avagin@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Dan Carpenter authored
We should be returning -ENOMEM if qed_mcp_cmd_add_elem() fails. The current code returns success. Fixes: 4ed1eea8 ("qed: Revise MFW command locking") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Tomer Tayar <Tomer.Tayar@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Dan Carpenter authored
We accidentally left this dead code behind after commit 5952fde1 ("net: sched: choke: remove dead filter classify code"). Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Dan Carpenter authored
There is a cut and paste bug here, so we accidentally clear the first few bytes of "resp" a second time instead of clearing "ctx". Fixes: 50c0add5 ("liquidio: refactor interrupt moderation code") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Felix Manlunas <felix.manlunas@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joao Pinto authored
In hardware configurations where multiple queues are active, the rx queue needs to be mapped into a dma channel, even if a single rx queue is used. Signed-off-by: Joao Pinto <jpinto@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
Add all the currently available SPEED_<foo> strings. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Michael Chan says: ==================== bnxt_en: Updates for net-next. Main changes are to add WoL and selftest features, optimize XDP_TX by using short BDs, and to cap the usage of MSIX. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Chan authored
The current code enables up to the maximum MSIX vectors in the PCIE config space without considering the max completion rings available. An MSIX vector is only useful when it has an associated completion ring, so it is better to cap it. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
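A hedged sketch of the cap being described, with illustrative names rather than the bnxt_en code: never request more MSI-X vectors than there are completion rings to service them.

    /* Illustrative only */
    static int example_enable_msix(struct pci_dev *pdev, int max_cp_rings,
                                   struct msix_entry *entries, int nvecs)
    {
            /* a vector with no completion ring behind it does no useful
             * work, so cap the request at the ring count
             */
            nvecs = min(nvecs, max_cp_rings);

            return pci_enable_msix_range(pdev, entries, 1, nvecs);
    }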
-