- 22 Feb, 2018 4 commits
-
-
Nathan Fontenot authored
The approach of one counter to rule them all when tracking the number of active sub-crqs, pools, and napi has problems handling some failover scenarios. This is because the sub-crqs, pools, and napi are initialized in different places, and the active counts are updated in yet others. This patch simplifies things by keeping separate tx and rx counters for the sub-crqs, pools, and napi. Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Lunn authored
MV88E6352 and later switches support GPIO control through the "Scratch & Misc" global2 register. Two of the pins controlled this way on the mv88e6390 family are the external MDIO pins. They can either be used as part of the MII interface for port 0, as GPIOs, or for MDIO. Add a function to configure them for MDIO, if possible, and call it when registering the external MDIO bus. Suggested-by: Russell King <rmk@armlinux.org.uk> Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Falcon authored
With the recent change, transmissions that only needed one descriptor were being missed. The result is that such packets were tracked as outstanding transmissions but never removed when their completion notifications were received. Fixes: ffc385b9 ("ibmvnic: Keep track of supplementary TX descriptors") Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux
David S. Miller authored
Saeed Mahameed says: ==================== mlx5-updates-2018-02-21 This series includes shared code updates for the mlx5 core driver, for both the netdev and rdma subsystems. By Saeed: the first six patches of the series address a performance issue and should provide a performance boost for multi-core, IRQ-interrupt-hungry workloads. The issue is fixed in the first patch; the remaining patches refactor the code in light of that fix. The problem being fixed is a shared spinlock, accessed from all HCA IRQs, that protects the CQ database. To solve this we simply move the CQ database and its spinlock to be per EQ (IRQ), and thus per core. By Yonatan: fragmented completion queue (CQ) support for RDMA. The core driver now creates fragmented CQ buffers rather than one large contiguous memory buffer. The scheme already exists and is used by the netdev CQs; the patch shares that code with the rdma CQ creation flow and makes use of the new API in the mlx5_ib driver. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
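For illustration, a rough sketch of the data-structure move described above (type and field names assumed, not verbatim mlx5 code): the CQ table and the spinlock protecting it live inside each EQ rather than in one device-wide structure, so completion handling on one IRQ/core no longer contends with the others.

	struct mlx5_cq_table {
		spinlock_t lock;		/* protects the radix tree */
		struct radix_tree_root tree;	/* cqn -> CQ lookup */
	};

	struct mlx5_eq {
		u32 eqn;			/* EQ number */
		unsigned int irqn;		/* IRQ bound to this EQ */
		struct mlx5_cq_table cq_table;	/* previously one shared table for all EQs */
	};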
-
- 21 Feb, 2018 28 commits
-
-
David S. Miller authored
Matteo Croce says: ==================== Remove IPVlan module dependencies on IPv6 and L3 Master dev The IPVlan module currently depends on IPv6 and L3 Master dev. Refactor the code to allow building the IPVlan module regardless of the value of CONFIG_IPV6, as is done in other drivers like VXLAN or GENEVE. Also change the CONFIG_NET_L3_MASTER_DEV dependency into a select, since compiling the L3 Master device support on its own makes little sense. With CONFIG_IPV6 enabled:
$ grep -wE 'CONFIG_(IPV6|IPVLAN)' .config
CONFIG_IPV6=y
CONFIG_IPVLAN=m
$ ll drivers/net/ipvlan/ipvlan.ko
48K drivers/net/ipvlan/ipvlan.ko
and with CONFIG_IPV6 disabled:
$ grep -wE 'CONFIG_(IPV6|IPVLAN)' .config
CONFIG_IPVLAN=m
$ ll drivers/net/ipvlan/ipvlan.ko
44K drivers/net/ipvlan/ipvlan.ko
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matteo Croce authored
The L3 Master device is just glue between the core networking code and device drivers, so it should be selected automatically rather than requiring it to be enabled explicitly. Signed-off-by: Matteo Croce <mcroce@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
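A minimal Kconfig sketch of the change described above (fragment only, other options omitted; the 'depends on INET' line is assumed from the existing entry):

	config IPVLAN
		tristate "IP-VLAN support"
		depends on INET
		select NET_L3_MASTER_DEV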
-
Matteo Croce authored
IPVlan has a hard dependency on IPv6. Refactor the ipvlan code to allow compiling it with IPv6 disabled, move duplicate code into addr_equal(), and refactor a series of if-else statements into a switch. Signed-off-by: Matteo Croce <mcroce@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
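A hedged sketch of the helper mentioned above (the name addr_equal() comes from the commit text; the body and field names are assumptions):

	static bool addr_equal(bool is_v6, struct ipvl_addr *addr, const void *iaddr)
	{
		if (is_v6) {
	#if IS_ENABLED(CONFIG_IPV6)
			return ipv6_addr_equal(&addr->ip6addr, iaddr);
	#else
			return false;	/* no IPv6 addresses exist in this build */
	#endif
		}
		return addr->ip4addr.s_addr == ((const struct in_addr *)iaddr)->s_addr;
	}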
-
Donald Sharp authored
Allow a rule that is being added/deleted/modified or dumped to contain the originating protocol's id. The protocol is handled just like a route's originating protocol. This is especially useful because a growing number of user-space programs add rules. Also allow the vrf device to specify that the kernel is the originator of the rule created for this device. Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
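If iproute2 exposes the new attribute, usage from a routing daemon could look roughly like this (the syntax and the protocol number are assumptions, shown only as an illustration):

	# tag a rule with the daemon's own protocol id so it can later be identified
	ip rule add from 192.0.2.0/24 table 100 protocol 42
	ip rule show	# rules now report their originating protocol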
-
Nathan Fontenot authored
When a failure occurs during initialization of the tx sub-crq irqs, we should branch to the cleanup of the tx irqs. The current code branches to the rx irq cleanup and attempts to clean up the rx irqs, which have not been initialized. Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yafang Shao authored
The TCPF_ macros depend on the definitions of the TCP_ state macros, so it is better to define them in terms of the TCP_ macros. Signed-off-by: Yafang Shao <laoar.shao@gmail.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
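For illustration, the idea is to derive each flag from its corresponding state value so the two can never drift apart (a sketch modeled on include/net/tcp_states.h; the remaining states follow the same pattern):

	enum {
		TCPF_ESTABLISHED = (1 << TCP_ESTABLISHED),
		TCPF_SYN_SENT    = (1 << TCP_SYN_SENT),
		TCPF_SYN_RECV    = (1 << TCP_SYN_RECV),
		TCPF_FIN_WAIT1   = (1 << TCP_FIN_WAIT1),
		/* ... */
	};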
-
David S. Miller authored
Eric Dumazet says: ==================== tcp: remove non GSO code Switching TCP to GSO mode, relying on the core networking layers to perform any needed adaptation for dumb devices, was overdue. 1) Most TCP developments are done with TSO in mind. 2) Fewer high-resolution timers need to be armed for TCP pacing. 3) GSO can benefit from the xmit_more hint. 4) Receiver GRO is more effective (as if TSO was used for real on the sender) -> fewer ACK packets and less overhead. 5) Write queues have less overhead (one skb holds about 64KB of payload). 6) SACK coalescing just works (no payload in skb->head). 7) The rtx rb-tree contains fewer packets, so SACK processing is cheaper. 8) Removal of legacy code. Fewer maintenance hassles. Note that I have left the sendpage/zerocopy paths, but they probably can benefit from the same strategy. Thanks to Oleksandr Natalenko for reporting a performance issue for BBR/fq_codel, which was the main reason I worked on this patch series. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Since all skbs in write/rtx queues have CHECKSUM_PARTIAL, we can remove dead code. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
We no longer have skbs with skb->ip_summed == CHECKSUM_NONE in TCP write queues. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
We no longer have skbs with skb->ip_summed == CHECKSUM_NONE in TCP write queues. We can remove dead code in tcp_sendmsg(). Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Since TCP relies on GSO, we do not need this helper anymore. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
After previous commit, sk_can_gso() is always true. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Oleksandr Natalenko reported performance issues with BBR without the FQ packet scheduler, which were root-caused to the lack of SG and GSO/TSO in his configuration. In this mode, TCP internal pacing has to set up a high-resolution timer for each MSS sent. We could implement in TCP a strategy similar to the one adopted in commit fefa569a ("net_sched: sch_fq: account for schedule/timers drifts"), or decide to finally switch the TCP stack to a GSO-only mode. This has many benefits: 1) Most TCP developments are done with TSO in mind. 2) Fewer high-resolution timers need to be armed for TCP pacing. 3) GSO can benefit from the xmit_more hint. 4) Receiver GRO is more effective (as if TSO was used for real on the sender) -> lower ACK traffic. 5) Write queues have less overhead (one skb holds about 64KB of payload). 6) SACK coalescing just works. 7) The rtx rb-tree contains fewer packets, so SACK processing is cheaper. This patch implements the minimal change; we can remove some legacy code as follow-ups. Tested: on a 40Gbit link, one netperf -t TCP_STREAM:
BBR+fq: sg on: 26 Gbits/sec, sg off: 15.7 Gbits/sec (was 2.3 Gbits/sec before the patch)
BBR+pfifo_fast: sg on: 24.2 Gbits/sec, sg off: 14.9 Gbits/sec (was 0.66 Gbits/sec before the patch!)
BBR+fq_codel: sg on: 24.4 Gbits/sec, sg off: 15 Gbits/sec (was 0.66 Gbits/sec before the patch!)
Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: Oleksandr Natalenko <oleksandr@natalenko.name> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Nathan Fontenot says: ==================== ibmvnic: Make driver resources dynamic The ibmvnic driver needs to be able to handle the number of tx/rx sub-crqs changing during a reset of the driver. To do this several changes need to be made. First, the num_active_[tx|rx]_pools counters need to be renamed to num_active_[tx|rx]_scrqs and updated after resource initialization. With this change we can now release and init the sub-crqs and napi (for rx sub-crqs) when the number of sub-crqs changes. Lastly, the stats buffer allocation is updated to always allocate stats buffers for the maximum number of sub-crqs. -Nathan --- Updates for V3: Patch 3/5 - Make the do_h_free parameter a bool Updates for V2: Patch 3/5 - Use the correct queue count when the driver is in the probed state when releasing sub-crqs. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nathan Fontenot authored
To avoid losing any stats when the number of sub-crqs changes, allocate the max number of stats buffers so a stats buffer exists for all possible sub-crqs. Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nathan Fontenot authored
In order to handle the number of rx sub-crqs changing during a driver reset, the ibmvnic driver also needs to update the number of napi instances. To do this, the code to init and free the napi instances is moved into its own routines so they can be called during the reset process. Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nathan Fontenot authored
When the driver resets, it is possible that the number of tx/rx sub-crqs can change. This patch handles this so that the driver does not try to access non-existent sub-crqs. The count used for releasing sub-crqs depends on the adapter state. The active queue count is not set during probe, so if we are releasing in the probed state we use the requested queue count. Additionally, a parameter is added to release_sub_crqs() so that we know whether the hcall to free the sub-crq needs to be made. In the reset path we have to reset the main crq, which is a free followed by a register of the main crq. Freeing the main crq results in all of the sub-crqs being freed. When updating the sub-crq count in the reset path we do not want to h_free the sub-crqs, since they have already been freed. Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
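As a sketch, the new parameter (do_h_free, named in the series notes above) simply tells release_sub_crqs() whether the hcall that frees each sub-crq still needs to be made:

	/* Reset-path callers pass do_h_free = false: freeing the main crq has
	 * already released the sub-crqs in firmware, so only the driver-side
	 * resources need to be torn down. */
	static void release_sub_crqs(struct ibmvnic_adapter *adapter, bool do_h_free);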
-
Nathan Fontenot authored
In preparation for using the active scrq count to track more active resources, move the setting of the active count to after initialization occurs, both in initial driver init and during driver reset. Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nathan Fontenot authored
Rename the tx/rx active pool variables to be tx/rx active scrq counts. The tx/rx pools are per sub-crq, so this is a more appropriate name. This is also a preparatory step for using these variables to handle updates to sub-crqs and napi based on the active count. Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Gustavo A. R. Silva authored
In preparation for enabling -Wimplicit-fallthrough, mark switch cases where we are expecting to fall through. Addresses-Coverity-ID: 1465362 ("Missing break in switch") Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Acked-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
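The marking is the usual kernel-style comment that -Wimplicit-fallthrough recognizes, roughly (case and function names here are illustrative, not the actual rds code):

	switch (op) {
	case OP_PREPARE:
		prepare();
		/* fall through */
	case OP_COMMIT:
		commit();
		break;
	default:
		break;
	}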
-
David S. Miller authored
Finn Thain says: ==================== Fixes, cleanup and modernization for 8390 ethernet drivers Changes since v4 of combined patch series: - Removed redundant and non-portable MACH_IS_MAC tests. - Added acked-by tags from Geert Uytterhoeven. - Omitted patches unrelated to 8390 drivers. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Finn Thain authored
Use dev_foo() to log the slot number instead of the unexpanded "eth%d" format string. Disambiguate the two identical "Card type %s is unsupported" messages. Tested-by: Stan Johnson <userm57@yahoo.com> Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Finn Thain authored
This resolves an old bug that constrained this driver to no more than one card. Tested-by: Stan Johnson <userm57@yahoo.com> Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Finn Thain authored
The lib8390 module parameter 'msg_enable' doesn't do anything useful: it causes an ancient version string to be logged. Remove redundant code that logs the same string. In ne.c and wd.c, the value of ei_local->msg_enable is used before being assigned. Use ne_msg_enable and wd_msg_enable, respectively. Most of the other 8390 drivers never assign ei_local->msg_enable. Use the 'msg_enable' module parameter from lib8390 as the default value. Eliminate the pointless static and local variables. Clean up an indentation mistake. All of these issues originated from the same patch. Cc: Russell King <linux@armlinux.org.uk> Fixes: c45f812f ("8390 : Replace ei_debug with msg_enable/NETIF_MSG_* feature") Tested-by: Stan Johnson <userm57@yahoo.com> Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Finn Thain authored
The hydra, zorro8390 and mcf8390 drivers all #include "lib8390.c" and have no need for 8390.o. modinfo confirms no dependency on 8390.ko. Drop the redundant dependency from the Makefile. objdump confirms that this patch has no effect on the module binaries. The superfluous additions of 8390.o were introduced in commit 644570b8 ("8390: Move the 8390 related drivers"). Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> Acked-by: Greg Ungerer <gerg@linux-m68k.org> Signed-off-by: David S. Miller <davem@davemloft.net>
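Presumably the Makefile change amounts to dropping the extra object, along these lines (illustrative fragment, one driver shown):

	# before: obj-$(CONFIG_HYDRA) += hydra.o 8390.o
	# after (lib8390.c is #included by the driver, so 8390.o is not needed):
	obj-$(CONFIG_HYDRA) += hydra.o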
-
Heiner Kallweit authored
rtl8169_init_phy() resets the PHY anyway after applying the chip-specific PHY configuration. So we don't need to soft-reset the PHY as part of the chip-specific configuration. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eyal Birger authored
This commit adds a new tc ematch for using netfilter xtables matches. This allows early classification as well as mirroring/redirecting traffic based on logic implemented in netfilter extensions. The currently supported use case is classification based on the incoming IPsec state used during decapsulation, using the 'policy' iptables extension (xt_policy). The module dynamically fetches the netfilter match module and calls it using a fake xt_action_param structure based on validated userspace-provided parameters. As the xt_policy match does not access skb->data, no skb modifications are needed on match. Signed-off-by: Eyal Birger <eyal.birger@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
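A rough usage sketch (the tc ematch syntax and xt_policy options are assumptions, device names are hypothetical): classify ingress traffic that arrived through an IPsec state and redirect it to another device.

	tc qdisc add dev eth0 ingress
	tc filter add dev eth0 ingress basic \
		match 'ipt(-m policy --dir in --pol ipsec)' \
		action mirred egress redirect dev ifb0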
-
Heiner Kallweit authored
Commit bde135a6 "r8169: only enable PCI wakeups when WOL is active" removed the only user of flag RTL_FEATURE_WOL. So let's remove some now dead code. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 20 Feb, 2018 8 commits
-
-
David S. Miller authored
Niklas Cassel says: ==================== stmmac multi-queue fixes and cleanups ==================== Reviewed-by: Jose Abreu <joabreu@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Niklas Cassel authored
Honor error code from stmmac_dt_phy() instead of always returning -ENODEV. No functional change intended. Signed-off-by: Niklas Cassel <niklas.cassel@axis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Niklas Cassel authored
The device tree binding for stmmac says: - Multiple TX Queues parameters: below the list of all the parameters to configure the multiple TX queues: - snps,tx-queues-to-use: number of TX queues to be used in the driver [...] - For each TX queue [...] However, if one specifies snps,tx-queues-to-use = 2 but omits the queue subnodes, or defines just one queue subnode, we will get tx queue timeouts even though the driver appears to initialize the queues with sane default values. This is because the initialization code only initializes as many queues as it finds subnodes, potentially leaving some queues uninitialized. To avoid hard-to-debug issues, return an error if the number of subnodes differs from snps,tx-queues-to-use/snps,rx-queues-to-use. Signed-off-by: Niklas Cassel <niklas.cassel@axis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
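For reference, a device tree fragment in the shape the binding expects (values illustrative): the number of queue subnodes must match snps,tx-queues-to-use.

	mtl_tx_setup: tx-queues-config {
		snps,tx-queues-to-use = <2>;
		queue0 {
			snps,dcb-algorithm;
		};
		queue1 {
			snps,dcb-algorithm;
		};
	};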
-
Niklas Cassel authored
stmmac_mac_config_rx_queues_routing() incorrectly calls rx_queue_prio() instead of rx_queue_routing(). This looks like a copy paste issue, since stmmac_mac_config_rx_queues_prio() already calls rx_queue_prio(), and both stmmac_mac_config_rx_queues_routing() and stmmac_mac_config_rx_queues_prio() are very similar in structure. Fixes: abe80fdc ("net: stmmac: RX queue routing configuration") Signed-off-by: Niklas Cassel <niklas.cassel@axis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Niklas Cassel authored
Looking at dwmac4_tx_queue_routing(), it is obvious that it sets up rx queue routing. Rename dwmac4_tx_queue_routing() to dwmac4_rx_queue_routing() to better match reality. Fixes: abe80fdc ("net: stmmac: RX queue routing configuration") Signed-off-by: Niklas Cassel <niklas.cassel@axis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Niklas Cassel authored
The current code assumes that a tx_skbuff entry has been cleared by stmmac_tx_clean() before stmmac_xmit()/stmmac_tso_xmit() assigns a new skb to that entry. However, since we never check the current value before overwriting it, it is theoretically possible that a non-NULL value is overwritten. Add WARN_ONs to verify that each entry in tx_skbuff is NULL before it is assigned a new value. Signed-off-by: Niklas Cassel <niklas.cassel@axis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
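Conceptually the check amounts to something like this (a sketch, not the exact driver code):

	/* The entry is about to be reused for a new skb; stmmac_tx_clean()
	 * should already have set it to NULL. */
	WARN_ON(tx_q->tx_skbuff[entry]);
	tx_q->tx_skbuff[entry] = skb;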
-
Niklas Cassel authored
tx_skbuff is initialized to NULL in init_dma_tx_desc_rings(), which is called from ndo_open(). stmmac_tx_clean() frees any non-NULL skb, and sets the tx_skbuff entry to NULL. Hence, there is no need to set skbuff entries to NULL in stmmac_xmit()/stmmac_tso_xmit(), and doing so falsely gives the reader the impression that it is needed. Do not clear tx_skbuff entries in stmmac_xmit()/stmmac_tso_xmit(). Signed-off-by: Niklas Cassel <niklas.cassel@axis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Niklas Cassel authored
The DMA engine in dwmac4 can segment a large TSO packet into several smaller packets of (at most) Maximum Segment Size (MSS). The DMA engine fetches and saves the MSS via a context descriptor. This context descriptor has to be provided to each tx DMA channel. To ensure that this is done, move the struct member mss from stmmac_priv to stmmac_tx_queue. stmmac_reset_queues_param() now also resets mss, together with other queue parameters, so the reset of the mss value can be removed from stmmac_resume(). init_dma_tx_desc_rings() now also resets mss, together with other queue parameters, so the reset of the mss value can be removed from stmmac_open(). This fixes tx queue timeouts for dwmac4, with DT property snps,tx-queues-to-use > 1, when running iperf3 with multiple threads. Fixes: ce736788 ("net: stmmac: adding multiple buffers for TX") Signed-off-by: Niklas Cassel <niklas.cassel@axis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
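In sketch form, the move described above looks like this (struct names from the commit text, other members omitted):

	struct stmmac_tx_queue {
		u32 queue_index;
		u32 mss;	/* per-queue MSS; previously a single field in stmmac_priv */
	};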
-