- 18 May, 2018 40 commits
-
Jose Abreu authored
Some HW-specific setups, like sun8i, do not populate all the necessary callbacks that the HWIF helpers expect. Fix this by always trying to get the generic helpers and using them to populate any callbacks that were not previously populated by the HW-specific setup. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Fixes: 5f0456b4 ("net: stmmac: Implement logic to automatically select HW Interface") Reported-by: Corentin Labbe <clabbe.montjoie@gmail.com> Tested-by: Corentin Labbe <clabbe.montjoie@gmail.com> Cc: Corentin Labbe <clabbe.montjoie@gmail.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
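A minimal, self-contained sketch of the fallback pattern described above; the structure and helper names are illustrative and not the actual stmmac code:

#include <stdio.h>

/* Illustrative callback table; field names are assumptions, not stmmac's. */
struct desc_ops {
	void (*clear)(void);
	void (*set_addr)(void);
};

static void generic_clear(void)    { puts("generic clear"); }
static void generic_set_addr(void) { puts("generic set_addr"); }
static void sun8i_set_addr(void)   { puts("sun8i set_addr"); }

/* Fill in generic helpers for every callback the HW-specific setup left NULL. */
static void populate_defaults(struct desc_ops *ops)
{
	if (!ops->clear)
		ops->clear = generic_clear;
	if (!ops->set_addr)
		ops->set_addr = generic_set_addr;
}

int main(void)
{
	/* HW-specific setup (e.g. sun8i) only provided set_addr. */
	struct desc_ops ops = { .set_addr = sun8i_set_addr };

	populate_defaults(&ops);
	ops.clear();    /* falls back to the generic helper */
	ops.set_addr(); /* keeps the HW-specific one */
	return 0;
}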
-
Rahul Lakkireddy authored
For T6, collect info on queue mapping to corresponding PF/VF in SGE. Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com> Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Antoine Tenart authored
This patch on the Marvell PPv2 driver is only cosmetic. Two typos are fixed, along with other cosmetic issues such as extra new lines and tabs vs. spaces. Suggested-by: Stefan Chulski <stefanc@marvell.com> Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Colin Ian King authored
Trivial fix to a spelling mistake in printk message text. Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
kbuild test robot authored
Fixes: 20b654df ("tcp: support DUPACK threshold in RACK") Signed-off-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Ursula Braun says: ==================== net/smc: cleanups 2018-05-18 Here are SMC patches for net-next providing restructuring and cleanups in different areas. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hans Wippel authored
This patch splits up the functions smc_connect_rdma and smc_listen_work into smaller functions. Signed-off-by: Hans Wippel <hwippel@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hans Wippel authored
This patch changes the function smc_buf_free to use the SMC link group instead of the link as function parameter. Also, it changes the order of the other two parameters. Signed-off-by: Hans Wippel <hwippel@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hans Wippel authored
This patch consists of Christmas tree fixes and removal of an unneeded function parameter. Signed-off-by: Hans Wippel <hwippel@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hans Wippel authored
This patch moves a CDC sanity check from smc_cdc_msg_recv_action() to the other sanity checks in smc_cdc_rx_handler(). While doing this, it simplifies smc_cdc_msg_recv() and removes unneeded function parameters. Signed-off-by: Hans Wippel <hwippel@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hans Wippel authored
SMC connection and buffer handling belong to smc_core. So, this patch moves this code from smc.h to smc_core. Signed-off-by: Hans Wippel <hwippel@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hans Wippel authored
Currently, the write offset within the RMB is calculated on each write operation although it is fixed for each connection. With this patch, the offset is calculated once and stored in a connection-specific variable. Signed-off-by: Hans Wippel <hwippel@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hans Wippel authored
The connection index is actually an RMBE index. So, this patch changes the name accordingly. Signed-off-by: Hans Wippel <hwippel@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hans Wippel authored
This patch moves the global link group list to smc_core where the link group functions are. To make this work, it moves code in af_smc and smc_ib that operates on the link group list to smc_core as well. While at it, the link group counter is integrated into the list structure and initialized to zero. Signed-off-by: Hans Wippel <hwippel@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
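A compact sketch of what "the counter is integrated into the list structure" can look like; the field and type names are assumptions, not the SMC definitions:

/* Illustrative fragment: a global list of link groups carrying its own counter. */
struct node {
	struct node *next;
};

struct lgr_list_example {
	struct node *head;   /* list of link groups */
	unsigned int num;    /* link group counter, initialized to zero */
};

static struct lgr_list_example lgr_list = { .head = 0, .num = 0 };

static void lgr_add(struct lgr_list_example *l, struct node *n)
{
	n->next = l->head;
	l->head = n;
	l->num++;            /* the counter lives next to the list it counts */
}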
-
Hans Wippel authored
In addition to the buffer references, SMC currently stores the sizes of the receive and send buffers in each connection as separate variables. This patch introduces a buffer length variable in the common buffer descriptor and uses this length instead. Signed-off-by: Hans Wippel <hwippel@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
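Roughly, the change trades separate per-connection size variables for a length carried in the shared buffer descriptor; a sketch with illustrative names (the real SMC structures differ in detail):

struct buf_desc_example {
	void *cpu_addr;      /* the buffer itself */
	unsigned int len;    /* buffer length now lives in the descriptor ... */
};

struct conn_example {
	struct buf_desc_example *sndbuf_desc;
	struct buf_desc_example *rmb_desc;
	/* ... instead of separate sndbuf_size / rmbe_size members here */
};

static unsigned int send_space(const struct conn_example *conn)
{
	return conn->sndbuf_desc->len;   /* one source of truth for the size */
}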
-
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
David S. Miller authored
Saeed Mahameed says: ==================== mlx5e-updates-2018-05-17 From: Or Gerlitz <ogerlitz@mellanox.com> This series addresses a regression introduced by the shared block TC changes [1]. Currently, for VF->VF and uplink->VF rules, the TC core (cls_api) attempts to offload the same flow multiple times into the driver, as a side effect of the mlx5 registration to the egdev callback. We use the flow cookie to ignore attempts to add such flows; we can't reject them (return an error), b/c that would fail the offload attempt, so we ignore them. The last patch of the series deals with exposing HW stats counters through ethtool for the vport reps. Dave - the regression that we are addressing was introduced in 4.15 [1] and applies to nfp and mlx5. Jiri suggested pushing the driver-side fixes to net-next; this is already done for nfp [2][3]. Once this is upstream, we will submit a small/point single-patch fix for the TC core code which can serve for net and stable, but will not be carried into net-next, b/c it might limit some future use-cases. [1] 208c0f4b "net: sched: use tc_setup_cb_call to call per-block callbacks" [2] c50647d3 "nfp: flower: ignore duplicate cb requests for same rule" [3] 54a4a034 "nfp: flower: support offloading multiple rules with same cookie" ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux
David S. Miller authored
Saeed Mahameed says: ==================== mlx5-updates-2018-05-17 mlx5 core driver updates for both the net-next and rdma-next branches. From Christophe JAILLET, the first three patches use kvfree where needed. From: Or Gerlitz <ogerlitz@mellanox.com> The next six patches from Roi and co. add support for a merged SR-IOV e-switch, which serves cases where both PFs, the VFs set on them, and both uplinks are to be used in a single v-switch SW model. When merged e-switch is supported, the per-port e-switch is logically merged into one e-switch that spans both physical ports and all the VFs. This model allows offloading TC eswitch rules between VFs belonging to different PFs (and hence having different eswitch affinity); it also lays some of the foundations needed for uplink LAG support. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Eric Dumazet says: ==================== tcp: implement SACK compression When TCP receives an out-of-order packet, it immediately sends a SACK packet, generating network load but also forcing the receiver to send 1-MSS pathological packets, increasing its RTX queue length/depth, and thus processing time. Wifi networks suffer from this aggressive behavior, but generally speaking, all these SACK packets add fuel to the fire when networks are under congestion. This patch series adds SACK compression, but the infrastructure could be leveraged to also compress ACKs in the future. v2: Addressed Neal's feedback. Added two sysctls to allow fine-tuning, or even disabling the feature. v3: take rtt = min(srtt, rcv_rtt) as Yuchung suggested, because rcv_rtt can be overestimated for RPC (or sender-limited) ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
This per-netns sysctl allows for TCP SACK compression fine-tuning. It limits the number of SACKs that can be compressed. Setting it to 0 disables SACK compression. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
This per-netns sysctl allows for TCP SACK compression fine-tuning. Its default value is 1,000,000 ns, or 1 ms, to match the TSO autosizing period. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
This counter tracks the number of ACK packets that the host has not sent, thanks to ACK compression. Sample output:

$ nstat -n;sleep 1;nstat|egrep "IpInReceives|IpOutRequests|TcpInSegs|TcpOutSegs|TcpExtTCPAckCompressed"
IpInReceives                    123250             0.0
IpOutRequests                   3684               0.0
TcpInSegs                       123251             0.0
TcpOutSegs                      3684               0.0
TcpExtTCPAckCompressed          119252             0.0

Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
When TCP receives an out-of-order packet, it immediately sends a SACK packet, generating network load but also forcing the receiver to send 1-MSS pathological packets, increasing its RTX queue length/depth, and thus processing time. Wifi networks suffer from this aggressive behavior, but generally speaking, all these SACK packets add fuel to the fire when networks are under congestion. This patch adds a high-resolution timer and a tp->compressed_ack counter. Instead of sending a SACK, we program this timer with a small delay, based on RTT and capped to 1 ms: delay = min(5% of RTT, 1 ms) If subsequent SACKs need to be sent while the timer has not yet expired, we simply increment tp->compressed_ack. When the timer expires, a SACK is sent with the latest information. Whenever an ACK is sent (if data is sent, or if in-order data is received) the timer is canceled. Note that tcp_sack_new_ofo_skb() is able to force a SACK to be sent if the SACK blocks need to be shuffled, even if the timer has not expired. A new SNMP counter is added in the following patch. Two other patches add sysctls to allow changing the 1,000,000 and 44 values that this commit hard-coded. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Acked-by: Yuchung Cheng <ycheng@google.com> Acked-by: Toke Høiland-Jørgensen <toke@toke.dk> Signed-off-by: David S. Miller <davem@davemloft.net>
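The delay computation above can be illustrated with a tiny stand-alone program; the 5% shows up as rtt/20, and the cap is the 1,000,000 ns default mentioned in the earlier entries (variable and function names are illustrative, not the kernel's):

#include <stdio.h>

/* Sketch of delay = min(5% of RTT, 1 ms); rtt_us is the RTT estimate in
 * microseconds, the 1,000,000 ns cap is the default discussed above. */
static unsigned long long sack_delay_ns(unsigned long long rtt_us)
{
	unsigned long long cap_ns   = 1000000ULL;             /* 1 ms */
	unsigned long long delay_ns = rtt_us * 1000ULL / 20;  /* 5% of RTT */

	return delay_ns < cap_ns ? delay_ns : cap_ns;
}

int main(void)
{
	printf("rtt=200us -> delay=%llu ns\n", sack_delay_ns(200));     /* 10000 ns */
	printf("rtt=100ms -> delay=%llu ns\n", sack_delay_ns(100000));  /* capped at 1000000 ns */
	return 0;
}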
-
Eric Dumazet authored
As explained in commit 9f9843a7 ("tcp: properly handle stretch acks in slow start"), TCP stacks have to consider how many packets are acknowledged in one single ACK, because of GRO, but also because of ACK compression or losses. We plan to add SACK compression in the following patch; we must therefore not call tcp_enter_quickack_mode(). Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
The socket can not disappear under us. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexandre Belloni authored
ocelot_qsys.h is missing the SPDX identifier, fix that. Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com> Reviewed-by: Allan W. Nielsen <allan.nielsen@microsemi.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Jose Abreu says: ==================== net: stmmac: Clean-up and tune-up This aims to uniformize the handling of the different GMAC versions in the stmmac_main.c file and also to tune up the HW. Currently there are some if/else conditions in the main source file which call different callbacks depending on the ID of the GMAC. With the introduction of a generic HW interface handling which automatically selects the GMAC callbacks to be used, it is now unpleasant to see if conditions in the main code because this should be completely agnostic of the GMAC version. This series removes most of these conditions. There are some if conditions that remain untouched, but the callback handling is now uniformized. Tested on GMAC5, hope I didn't break any previous versions. Please check [1] for a performance analysis of patches 3-12. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jose Abreu authored
We can remove the if condition and instead check whether the return code is different from -EINVAL, meaning the callback is present. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
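The convention being relied on here, roughly: the callback wrapper returns -EINVAL when the HW table has no such callback, so callers can test the return code instead of testing the GMAC version. A simplified stand-alone sketch, not the actual hwif.h macro:

#include <errno.h>
#include <stdio.h>

struct hwif_ops_example {
	int (*rx_check_timestamp)(void *desc);   /* illustrative callback */
};

/* Wrapper: -EINVAL means "no such callback for this HW"; any other value
 * (including other error codes) means the callback exists and has run. */
static int do_rx_check_timestamp(struct hwif_ops_example *ops, void *desc)
{
	return ops->rx_check_timestamp ? ops->rx_check_timestamp(desc) : -EINVAL;
}

int main(void)
{
	struct hwif_ops_example ops = { 0 };   /* this HW provides no callback */

	if (do_rx_check_timestamp(&ops, NULL) != -EINVAL)
		puts("callback present, result usable");
	else
		puts("callback absent, skip");
	return 0;
}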
-
Jose Abreu authored
Stop using if conditions depending on the GMAC version for getting the descriptor skbuff address and use instead a helper implemented in the descriptor files. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jose Abreu authored
Currently an if condition is used to select the correct callback to set rx_owner in the descriptor. Let's keep this simple and always use the same callback. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jose Abreu authored
We either have .enable_dma_transmission or .set_tx_tail_ptr in the HW table callbacks; we can never have both, so there is no need to check the GMAC version. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jose Abreu authored
Instead of relying on the GMAC version for choosing whether we need to use the dma_init or the dma_init_{rx/tx}_chan callback, let's uniformize this and always use the dma_init_{rx/tx}_chan callbacks. While at it, fix the use of the dma_init_chan callback, which shall be called for as many channels as the max of rx/tx channels. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
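The per-channel init described above can be sketched as follows; note that the common dma_init_chan step has to cover max(rx_cnt, tx_cnt) channels. The function names mirror the callbacks mentioned above, but the bodies are illustrative stubs:

#include <stdio.h>

static void dma_init_rx_chan(unsigned int chan) { printf("rx chan %u\n", chan); }
static void dma_init_tx_chan(unsigned int chan) { printf("tx chan %u\n", chan); }
static void dma_init_chan(unsigned int chan)    { printf("common chan %u\n", chan); }

/* Illustrative per-channel DMA init: RX and TX channels are set up
 * separately, and the common setup runs for max(rx_cnt, tx_cnt) channels. */
static void dma_init_all(unsigned int rx_cnt, unsigned int tx_cnt)
{
	unsigned int max_cnt = rx_cnt > tx_cnt ? rx_cnt : tx_cnt;
	unsigned int chan;

	for (chan = 0; chan < rx_cnt; chan++)
		dma_init_rx_chan(chan);
	for (chan = 0; chan < tx_cnt; chan++)
		dma_init_tx_chan(chan);
	for (chan = 0; chan < max_cnt; chan++)
		dma_init_chan(chan);
}

int main(void)
{
	dma_init_all(4, 2);   /* e.g. 4 RX channels, 2 TX channels */
	return 0;
}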
-
Jose Abreu authored
The PTP and MMC modules' base addresses can depend on the GMAC version. As this is HW-specific, let's move this base address calculation to hwif.c. Also, add an entry in the HW table so that we can specify the module offset. This can later be extended to more modules, if deemed necessary. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
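One way to picture "add an entry in the HW table": the per-version table that already selects callbacks also carries the register offsets, so the main code no longer computes base addresses itself. A sketch with illustrative names and offsets, not the real stmmac table:

#include <stdint.h>
#include <stdio.h>

/* Illustrative per-version entry; offsets and field names are assumptions. */
struct hwif_entry_example {
	const char *name;
	uint32_t ptp_off;   /* offset of the PTP block for this GMAC version */
	uint32_t mmc_off;   /* offset of the MMC (counters) block */
};

static const struct hwif_entry_example table[] = {
	{ .name = "gmac3", .ptp_off = 0x700, .mmc_off = 0x100 },
	{ .name = "gmac4", .ptp_off = 0xb00, .mmc_off = 0x700 },
};

int main(void)
{
	uintptr_t ioaddr = 0x40000000UL;              /* pretend MMIO base */
	const struct hwif_entry_example *e = &table[1];

	printf("%s: ptp @ 0x%lx, mmc @ 0x%lx\n", e->name,
	       (unsigned long)(ioaddr + e->ptp_off),
	       (unsigned long)(ioaddr + e->mmc_off));
	return 0;
}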
-
Jose Abreu authored
With the introduction of the callback checks in hwif.h, we only call a callback if the HW supports it, so there is no longer a need to check the GMAC version. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jose Abreu authored
Instead of relying on the GMAC version for choosing whether we need to use the dma_{rx/tx}_mode or just the dma_mode callback, let's uniformize this and always use the dma_{rx/tx}_mode callbacks. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jose Abreu authored
Stop using if conditions depending on the GMAC version for clearing the descriptor and use instead a helper implemented in the descriptor files. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jose Abreu authored
Stop using if conditions depending on the GMAC version for setting the descriptor skbuff address and use instead a helper implemented in the descriptor files. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jose Abreu authored
This is hurting performance. Once the timer is armed, it should fire after the delay expires for the first packet sent, not for the last one. After this change, running iperf, the performance gain is +/- 24%. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
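In other words: re-arming the coalesce timer for every transmitted packet keeps pushing the deadline out, so it only fires after the last packet of a burst; arming it only when it is not already pending lets it expire relative to the first queued packet. A fragment sketching that arming policy with a hypothetical timer type, not the stmmac code:

#include <stdbool.h>

struct coal_timer_example {
	bool pending;   /* stands in for timer_pending() on a real timer */
};

static void timer_arm(struct coal_timer_example *t)
{
	t->pending = true;   /* real code would (re)program the actual timer */
}

static void on_xmit(struct coal_timer_example *t)
{
	/* Arm only if not already armed, so the deadline stays anchored to
	 * the first packet of the burst instead of sliding with every packet. */
	if (!t->pending)
		timer_arm(t);
}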
-
Jose Abreu authored
This enables OSP (Operate on Second Packet) for GMAC4. The feature allows the DMA to fetch the second descriptor while it is still processing the first one. Running iperf, the performance gain is +/- 38%. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Or Gerlitz authored
Currently the representor only reports the SW (slow-path) traffic counters. Add packet/byte reporting of the HW counters, which account for the total amount of traffic that was handled by the vport, on both the slow and fast (offloaded) paths. The newly exposed counters are named vport_rx/tx_packets/bytes. Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Adi Nissim <adin@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Or Gerlitz authored
For VF->VF and uplink->VF rules, the TC core (cls_api) attempts to offload the same flow multiple times into the driver, b/c we registered to the egdev callback. Use the flow cookie to ignore attempts to add such flows; we can't reject them (return an error), b/c this would fail the offload attempt, so we ignore them. We identify wrong stat/del calls using the flow ingress/egress flags, and here we do return an error to the core. Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Reviewed-by: Paul Blakey <paulb@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
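The dedup logic described above boils down to: look the flow up by its cookie before adding; if it is already stored (the duplicate egdev attempt), pretend success; for delete/stat calls whose ingress/egress flag does not match the stored flow, return an error. A generic stand-alone sketch, not the mlx5 implementation:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_FLOWS 16

struct flow_example {
	unsigned long cookie;
	bool ingress;    /* stands in for the ingress/egress flow flags */
	bool used;
};

static struct flow_example flows[MAX_FLOWS];

static struct flow_example *flow_lookup(unsigned long cookie)
{
	for (int i = 0; i < MAX_FLOWS; i++)
		if (flows[i].used && flows[i].cookie == cookie)
			return &flows[i];
	return NULL;
}

static int flow_add(unsigned long cookie, bool ingress)
{
	if (flow_lookup(cookie))
		return 0;                /* duplicate from egdev: silently ignore */
	for (int i = 0; i < MAX_FLOWS; i++) {
		if (!flows[i].used) {
			flows[i] = (struct flow_example){ cookie, ingress, true };
			return 0;
		}
	}
	return -ENOMEM;
}

static int flow_del(unsigned long cookie, bool ingress)
{
	struct flow_example *f = flow_lookup(cookie);

	if (!f || f->ingress != ingress)
		return -EINVAL;          /* wrong del/stat call: report an error */
	f->used = false;
	return 0;
}

int main(void)
{
	flow_add(0x1234, true);
	printf("duplicate add -> %d\n", flow_add(0x1234, false)); /* ignored: 0 */
	printf("wrong del     -> %d\n", flow_del(0x1234, false)); /* -EINVAL */
	printf("correct del   -> %d\n", flow_del(0x1234, true));  /* 0 */
	return 0;
}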
-