- 24 Feb, 2016 26 commits
-
-
Claudiu Manoil authored
Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Claudiu Manoil authored
Move the mapping of the head BD before the mapping of the fragments. The TOE (h/w offload) decision logic block can also be moved up (as the TOE flag belongs to the head BD), resulting in more localized code (TOE logic vs. BD mapping code blocks). Note that, for this h/w, the R (status) bit of the head BD of a S/G frame needs to be written last for reliable transmission. For the fragmented skb case, a local variable is used to temporarily store the status info of the first BD, replacing a BD status read. A merge of two "if (do_tstamp)" blocks was also possible. Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
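A minimal sketch of the ordering constraint described above, using simplified stand-in types rather than the actual gianfar descriptor layout; the BD_READY bit, struct fields and barrier are assumptions for illustration only:

```c
/* Illustrative sketch only: simplified stand-in types, not the gianfar
 * descriptor layout. It shows the ordering constraint described above:
 * for a S/G frame, the head BD's Ready/status bit is written only after
 * all fragment BDs have been set up.
 */
#include <stdint.h>

#define BD_READY 0x8000u            /* hypothetical "R" (ready) status bit */

struct tx_bd {                      /* simplified buffer descriptor */
	uint16_t status;
	uint16_t length;
	uint32_t buf_addr;
};

static void map_sg_frame(struct tx_bd *head, struct tx_bd *frags,
			 int nr_frags, uint16_t head_status)
{
	/* 1. Map and mark all fragment BDs first. */
	for (int i = 0; i < nr_frags; i++)
		frags[i].status |= BD_READY;

	/* 2. Make sure the fragment writes are visible before the head BD
	 *    is armed (a real driver would typically use a DMA write
	 *    barrier here).
	 */
	__sync_synchronize();

	/* 3. Only now hand the whole frame to hardware by writing the head
	 *    BD's status, including the Ready bit, last. The status value
	 *    was kept in a local variable instead of re-reading the BD.
	 */
	head->status = head_status | BD_READY;
}
```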
-
Rafał Miłecki authored
It needs workarounds very similar to the ones needed on BCM4707. It was tested on the D-Link DIR-885L home router. Signed-off-by: Rafał Miłecki <zajec5@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Ajit Khaparde says: ==================== be2net patches Please consider applying to net-next ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
ajit.khaparde@broadcom.com authored
In QnQ configurations like Flex-10, where the VLANs are inserted by the ASIC, on rare occasions the HW encounters a scenario where the final frame length ends up being greater than what the ASIC can support. This is because, when the TXULP pulls the TX WRB to check the length of the frame to be transmitted, it also adds the size of the VLANs to be inserted by the HW to the length of the frame indicated in the WRB, which in some cases fails the range check. This causes a UE. Avoid this by trimming the skb length to accommodate the VLAN insertion. Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
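A hedged sketch of the trimming idea only; the constants and helper below are hypothetical stand-ins, not the real be2net code:

```c
/* Illustrative sketch, not be2net code: the constants and helper name
 * are invented. The idea is to cap the frame length so that the VLAN
 * tag(s) the ASIC inserts later cannot push the final length past what
 * the hardware supports.
 */
#include <stddef.h>

#define HW_MAX_FRAME_LEN 9216u      /* hypothetical ASIC limit */
#define VLAN_HLEN        4u         /* one 802.1Q tag */

static size_t trim_for_hw_vlan(size_t skb_len, unsigned int tags_inserted)
{
	size_t budget = HW_MAX_FRAME_LEN - tags_inserted * VLAN_HLEN;

	/* Trim the payload so "frame + inserted VLANs" stays in range. */
	return skb_len > budget ? budget : skb_len;
}
```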
-
ajit.khaparde@broadcom.com authored
When 16-bit integers are loaded on CPUs whose native register size is wider, the CPU may need some extra operations before it can use them. Currently, some of the frequently used fields in the driver, such as the producer and consumer indices of the queues, are declared as u16. This patch declares such fields as u32. With this change we see the 64-byte packets-per-second numbers improve by about 4%. Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
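A hypothetical before/after illustration of that kind of change; the struct and field names are invented, not the be2net ones:

```c
/* Hypothetical illustration (names invented): ring indices widened from
 * u16 to u32 so the CPU can work with native-width registers without
 * extra narrowing/zero-extension operations.
 */
#include <stdint.h>

struct txq_old {                /* before: 16-bit indices */
	uint16_t head;          /* producer index */
	uint16_t tail;          /* consumer index */
};

struct txq_new {                /* after: 32-bit indices */
	uint32_t head;
	uint32_t tail;
};
```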
-
David S. Miller authored
Alexander Duyck says: ==================== Flow dissector fixes and improvements This patch series is meant to fix and/or improve a number of items within the flow dissector code. The main change out of all of this is that IPv4 and IPv6 fragmentation should now be handled better than it was. As a result we should see an improvement when handling things like IP fragment reassembly as the skbs should now only have header data in the linear portion of the buffer while the fragments will only hold payload data. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
We want to try and pull the L4 header in if it is available in the first fragment. As such, add the flag indicating that we want to pull in the headers for the first fragment. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Acked-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
The IPv6 parsing was using a local pointer when it could use the same pointer as the IPv4 portion of the code, since key_addrs is just a pointer and can support both IPv4 and IPv6. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Acked-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
The flow dissector bits handling FCoE did not actually validate that there was enough space for the FCoE header. Update things so that, if there is room, we add the header and report a good result; otherwise we do not add the header and report a bad result. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Acked-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
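A minimal sketch of that length check, with stand-in types and a hypothetical header size rather than the in-kernel flow dissector API:

```c
/* Sketch only, not the in-kernel flow dissector: only report a good
 * dissection result if the buffer actually holds a full FCoE header;
 * otherwise stop and report failure.
 */
#include <stdbool.h>
#include <stddef.h>

#define FCOE_HDR_LEN 38u        /* hypothetical header size used here */

static bool dissect_fcoe(const unsigned char *data, size_t nhoff, size_t hlen)
{
	if (nhoff + FCOE_HDR_LEN > hlen)
		return false;   /* not enough room: bad result, no header */

	/* Enough room: account for the header and report success. */
	(void)data;
	return true;
}
```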
-
Alexander Duyck authored
It turns out that for IPv4 we were reporting the ip_proto of the fragment, while for IPv6 we were not. This patch updates that behavior so that we always report the IP protocol of the fragment. In addition, it updates the payload offset code so that we determine the start of the payload, not including the L4 header, for any fragment after the first. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Acked-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
This patch corrects the logic for the IPv4 parsing so that it is consistent with how we handle IPv6. Specifically, even if the flow key does not indicate that we want the addresses, we may still need to look at the IP fragmentation bits and decide whether to stop after we have recognized the L3 header. Fixes: 807e165d ("flow_dissector: Add control/reporting of fragmentation") Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Acked-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
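A rough control-flow sketch of that idea, using invented parameter names and none of the real flow dissector types:

```c
/* Sketch with invented names: even when the caller did not ask for
 * addresses, the fragmentation bits are still inspected, and the
 * "stop after L3" request is honored in both cases.
 */
#include <stdbool.h>
#include <stdint.h>

static bool dissect_ipv4(uint16_t frag_off, bool want_addrs, bool stop_at_l3,
			 bool *is_fragment)
{
	if (want_addrs) {
		/* ... record source/destination addresses into the flow key ... */
	}

	/* The fragment check runs whether or not addresses were wanted:
	 * MF flag set or a nonzero fragment offset marks a fragment.
	 */
	*is_fragment = (frag_off & 0x3fff) != 0;

	/* Stop after recognizing the L3 header if the caller asked for it. */
	return !stop_at_l3;
}
```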
-
David S. Miller authored
Saeed Mahameed says: ==================== QoS and VxLAN offloads support for Mellanox 100G mlx5 driver
This patch series introduces QoS IEEE dcbnl support for PFC, ETS and max rate. In addition, we added VxLAN support and introduced a patch that modifies the driver to report checksum complete in the RX path for all IP (tunneled and non-tunneled) traffic that is not HW LRO.
This series is applied on top of the latest mlx5_ifc and NDO fixes we sent to the net tree:
- net/mlx5e: Use static constant netdevice ndos
- net/mlx5e: Remove select queue ndo initialization
- net/mlx5: Use offset based reserved field names in the IFC header file
The QoS patches depend on the IFC change since they expose new fields in the driver/firmware API. Both the QoS and VxLAN patches depend on the NDO changes, since they add new ndo entries.
Changes from V1:
- Fixed the S.O.B from "Matt" to "Matthew" to be aligned with the committer title.
- Don't populate VxLAN/dcbnl ndos for virtual functions.
- Addressed John's comment on mlx5_setup_tc to be aligned with the latest API changes.
- Added a device ETS capability check prior to query/modify of the ETS configuration.
- Call mlx5e_dcbnl_ieee_setets_core at the end of mlx5e_create_netdev and don't fail netdev creation in case it failed or ETS was not supported.
The series was applied on top of ("5270c4da Merge branch 'vxlan-cleanups'") plus the latest mlx5 ifc and ndo fixes from the net tree. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matthew Finlay authored
Add TSO and TX checksum counters for tunneled, inner packets. Signed-off-by: Matthew Finlay <matt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matthew Finlay authored
Add support for TSO and TX checksum when using hw assisted, tunneled offloads. Signed-off-by: Matthew Finlay <matt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matthew Finlay authored
If a VXLAN UDP dport is added to the device, it will:
- Configure the hardware to offload the port (up to the max supported).
- Advertise NETIF_F_GSO_UDP_TUNNEL and the supported hw_enc_features.
Signed-off-by: Matthew Finlay <matt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
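A simplified, hypothetical sketch of the "offload up to the max supported ports" bookkeeping; it models only the port table, not the mlx5 firmware commands or the real ndo callbacks:

```c
/* Hypothetical sketch: track VXLAN UDP dports offloaded to hardware,
 * up to an assumed hardware limit. None of these names are mlx5 code.
 */
#include <stdbool.h>
#include <stdint.h>

#define MAX_HW_VXLAN_PORTS 4        /* hypothetical hardware limit */

struct vxlan_table {
	uint16_t ports[MAX_HW_VXLAN_PORTS];
	int      count;
};

static bool vxlan_add_port(struct vxlan_table *t, uint16_t udp_dport)
{
	for (int i = 0; i < t->count; i++)
		if (t->ports[i] == udp_dport)
			return true;            /* already offloaded */

	if (t->count == MAX_HW_VXLAN_PORTS)
		return false;                   /* hardware table is full */

	/* ... program the hardware for this UDP dport here ... */
	t->ports[t->count++] = udp_dport;
	return true;
}
```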
-
Matthew Finlay authored
Add an ifndef include guard to en.h; it is needed for the upcoming VxLAN patchset. Signed-off-by: Matthew Finlay <matt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
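The shape of such a guard; the guard symbol below is a stand-in, not necessarily the one used in en.h:

```c
/* Hypothetical guard symbol: wrap the header in an #ifndef include guard
 * so it can safely be included from the new VXLAN files as well.
 */
#ifndef __MLX5_EN_H__
#define __MLX5_EN_H__

/* ... existing en.h declarations ... */

#endif /* __MLX5_EN_H__ */
```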
-
Matthew Finlay authored
Use checksum complete for all IP packets, unless they are HW LRO, in which case use checksum unnecessary. Signed-off-by: Matthew Finlay <matt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
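A decision sketch only; the enum values and helper below are invented stand-ins, not the kernel's CHECKSUM_* definitions:

```c
/* Invented stand-ins: HW LRO sessions report "unnecessary", every other
 * IP packet reports "complete" together with the hardware-computed sum.
 */
#include <stdbool.h>
#include <stdint.h>

enum rx_csum { CSUM_NONE, CSUM_UNNECESSARY, CSUM_COMPLETE };

static enum rx_csum pick_rx_csum(bool is_ip, bool is_hw_lro,
				 uint32_t hw_csum, uint32_t *csum_out)
{
	if (!is_ip)
		return CSUM_NONE;
	if (is_hw_lro)
		return CSUM_UNNECESSARY;    /* LRO: stack need not verify */

	*csum_out = hw_csum;                /* hand the HW sum to the stack */
	return CSUM_COMPLETE;
}
```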
-
Tariq Toukan authored
Implement set/get WOL via ethtool and add the needed device commands and structures to mlx5_ifc. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Rana Shahout <ranas@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tariq Toukan authored
Add support for DCBNL IEEE get/set max rate. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Achiad Shochat authored
Implement the set/get DCBNL IEEE PFC callbacks. Signed-off-by: Achiad Shochat <achiad@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Saeed Mahameed authored
Support the ndo_setup_tc callback and the needed methods for multi TC/UP support, and remove the default_vlan_prio from mlx5e_priv, which is always 0; it is replaced with a hardcoded "0" in the new select queue method. For that, we now create MAX_NUM_TC TISs (one per prio) on netdevice creation instead of priv->params.num_tc, which was always 1. So far each channel had a single TXQ; now each channel has a TXQ per TC (Traffic Class). Add en_dcbnl.c, which implements the set/get DCBNL IEEE ETS and set/get dcbx, and registers the mlx5e dcbnl ops. We still use the kernel's default TXQ selection method to select the channel to transmit through, but now we use our own method to select the TXQ inside the channel based on VLAN priority. In mlx5, as opposed to mlx4, TC group N gets lower priority than TC group N+1. CC: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: Rana Shahout <ranas@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
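A simplified stand-in for the selection scheme described above (the names, map layout and queue numbering are assumptions, not the exact mlx5e code): the kernel's default hashing picks the channel, then the VLAN priority picks which of that channel's per-TC queues is used.

```c
/* Assumed layout for illustration: each channel owns num_tc consecutive
 * TXQs, one per traffic class, and the 802.1p priority selects the TC.
 */
#include <stdint.h>

#define MAX_NUM_TC 8

struct tc_map {
	int num_tc;                     /* traffic classes currently enabled */
	uint8_t prio_to_tc[MAX_NUM_TC]; /* 802.1p priority -> traffic class */
};

static int select_txq(const struct tc_map *m, int channel, uint8_t vlan_prio)
{
	int tc = m->prio_to_tc[vlan_prio & 0x7];

	/* Channel chosen by the stack, TXQ within it chosen by VLAN prio. */
	return channel * m->num_tc + tc;
}
```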
-
Saeed Mahameed authored
Add access functions to set and query a physical port TC groups and prio parameters. Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Achiad Shochat authored
Add access functions to set and query a physical port PFC (Priority Flow Control) parameters. Signed-off-by: Achiad Shochat <achiad@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Achiad Shochat authored
All the device physical port access functions are implemented in the port.c file. We just extract the exposure of these functions from driver.h into a dedicated header file called port.h. Signed-off-by: Achiad Shochat <achiad@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Craig Gallek authored
One of the validation checks for the new array-based TCP SO_REUSEPORT validation was unintentionally dropped in ea8add2b. This adds it back. Lack of this check allows the user to allocate multiple sock_reuseport structures (leaking all but the first). Fixes: ea8add2b ("tcp/dccp: better use of ephemeral ports in bind()") Signed-off-by: Craig Gallek <kraig@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 23 Feb, 2016 14 commits
-
-
David S. Miller authored
Vivien Didelot says: ==================== net: dsa: pass bridge device to drivers This patchset simplifies the DSA layer. A switch may support multiple bridges with the same hardware VLAN. Thus a check such as dsa_bridge_check_vlan_range must be moved from the DSA layer to the concerned driver. The first purpose of this patchset is to help move this check into the mv88e6xxx driver, which is the only one affected at the moment. To do that, pass the bridge net_device structure directly down to the DSA drivers, instead of calculating a bitmask of bridge members. The second purpose is to prepare the replacement of the complex port_vlan_getnext approach. A second patchset is ready to follow, implementing port_vlan_dump and thus further simplifying the DSA slave code. Note that this patchset applies on top of https://lkml.org/lkml/2016/2/5/532. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
DSA drivers may support multiple bridge groups with the same hardware VLAN. The mv88e6xxx driver, which cannot yet, already has its own check for overlapping bridges. Thus remove the check from the DSA layer. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
The DSA drivers now have access to the VLAN prepare phase and the bridge net_device. It is easier to check for overlapping bridges from within the driver. Thus add such check in mv88e6xxx_port_vlan_prepare. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
Some DSA drivers may or may not support multiple software bridges on top of a hardware switch. It is more convenient for them to access the bridge's net_device for finer configuration. Removing the need to craft and access a bitmask also simplifies the code. This patch changes the signature of the bridge-related functions, updates the DSA drivers, and removes dsa_slave_br_port_mask. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Tested-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
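A hedged sketch of the signature change only; the struct and callback names below are simplified stand-ins, not the exact DSA ops table:

```c
/* Simplified stand-ins, not the real DSA structures: the point is the
 * shape of the change, where the callback now receives the bridge
 * net_device itself instead of a precomputed bitmask of bridge members.
 */
struct net_device;      /* opaque here */
struct dsa_switch;      /* opaque here */

struct dsa_bridge_ops_sketch {
	/* before (roughly): int (*port_bridge_join)(struct dsa_switch *ds,
	 *                                           int port,
	 *                                           unsigned int br_port_mask);
	 * after: the bridge device is passed directly.
	 */
	int (*port_bridge_join)(struct dsa_switch *ds, int port,
				struct net_device *bridge);
	int (*port_bridge_leave)(struct dsa_switch *ds, int port);
};
```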
-
Vivien Didelot authored
Add a per-port mv88e6xxx_priv_port structure to store per-port related data, instead of adding several arrays of DSA_MAX_PORTS elements in the mv88e6xxx_priv_state structure. It currently only contains the port STP state. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
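The shape of that refactor, with simplified names rather than the exact mv88e6xxx structures:

```c
/* Simplified names, not the actual mv88e6xxx code: per-port data moves
 * out of parallel DSA_MAX_PORTS-sized arrays and into one per-port
 * struct embedded in the chip-wide private state.
 */
#include <stdint.h>

#define DSA_MAX_PORTS 12

struct mv_port_state {          /* per-port data; only STP state for now */
	uint8_t stp_state;
};

struct mv_chip_state {          /* chip-wide private state */
	struct mv_port_state ports[DSA_MAX_PORTS];
};
```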
-
David S. Miller authored (merge of git://git.open-mesh.org/linux-merge)
Antonio Quartulli says: ==================== pull request [net-next]: batman-adv 20160223 This is a cleanup patchset: first the BATADV_BONDING_TQ_THRESHOLD constant gets removed, as it was defined but not used anywhere; then all our *_free_ref functions are renamed to *_put by Sven Eckelmann, in order to follow the kernel naming convention. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Dan Carpenter authored
We were changing return values and accidentally made rocker_world_port_obj_vlan_add() into a no-op. Fixes: fccd84d4 ('rocker: return -EOPNOTSUPP for undefined world ops') Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sven Eckelmann authored
The batman-adv source code is the only place in the kernel which uses the *_free_ref naming scheme for the *_put functions. Changing it to *_put makes it more consistent and makes it easier to understand the connection to the *_get functions. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
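An illustration of the rename only; the type and names below are generic stand-ins, not batman-adv code, and the point is simply that the function keeps its behavior while gaining the conventional *_put name that pairs with *_get:

```c
/* Generic stand-in for the rename: drop the last reference and free.
 * Old name: neigh_node_free_ref(); new, conventional name: *_put.
 */
#include <stdatomic.h>
#include <stdlib.h>

struct neigh_node {
	atomic_int refcount;
};

static void neigh_node_put(struct neigh_node *n)
{
	if (atomic_fetch_sub(&n->refcount, 1) == 1)
		free(n);                /* last reference dropped */
}
```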
-
Sven Eckelmann authored
The batman-adv source code is the only place in the kernel which uses the *_free_ref naming scheme for the *_put functions. Changing it to *_put makes it more consistent and makes it easier to understand the connection to the *_get functions. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Sven Eckelmann authored
The batman-adv source code is the only place in the kernel which uses the *_free_ref naming scheme for the *_put functions. Changing it to *_put makes it more consistent and makes it easier to understand the connection to the *_get functions. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Sven Eckelmann authored
The batman-adv source code is the only place in the kernel which uses the *_free_ref naming scheme for the *_put functions. Changing it to *_put makes it more consistent and makes it easier to understand the connection to the *_get functions. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Sven Eckelmann authored
The batman-adv source code is the only place in the kernel which uses the *_free_ref naming scheme for the *_put functions. Changing it to *_put makes it more consistent and makes it easier to understand the connection to the *_get functions. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Sven Eckelmann authored
The batman-adv source code is the only place in the kernel which uses the *_free_ref naming scheme for the *_put functions. Changing it to *_put makes it more consistent and makes it easier to understand the connection to the *_get functions. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Sven Eckelmann authored
The batman-adv source code is the only place in the kernel which uses the *_free_ref naming scheme for the *_put functions. Changing it to *_put makes it more consistent and makes it easier to understand the connection to the *_get functions. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-