- 12 Oct, 2017 32 commits
-
-
Ursula Braun authored
For RoCE, ib_query_gid() takes a reference count on the net_device. This reference count must be decreased by the caller. Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Reported-by: Parav Pandit <parav@mellanox.com> Reviewed-by: Parav Pandit <parav@mellanox.com> Fixes: 0cfdd8f9 ("smc: connection and link group creation") Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ursula Braun authored
SMC should not open-code the IB device's get_netdev function pointer. Replacing ib_query_gid(..., NULL) with ib_query_gid(..., gid_attr) allows access to the netdev. Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Suggested-by: Parav Pandit <parav@mellanox.com> Reviewed-by: Parav Pandit <parav@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
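A minimal sketch of the resulting pattern, assuming the 4.14-era ib_query_gid() signature; the helper name below is hypothetical, and the dev_put() releases the RoCE reference mentioned in the preceding commit.

    #include <rdma/ib_verbs.h>
    #include <linux/netdevice.h>

    /* Illustrative helper: resolve the netdev behind a RoCE GID entry via
     * the gid_attr argument, then drop the reference ib_query_gid() took.
     */
    static int smc_gid_ndev_example(struct ib_device *ibdev, u8 port)
    {
            union ib_gid gid;
            struct ib_gid_attr gid_attr;
            int rc;

            rc = ib_query_gid(ibdev, port, 0, &gid, &gid_attr);
            if (rc)
                    return rc;

            if (gid_attr.ndev) {
                    pr_info("port %u GID 0 belongs to %s\n", port,
                            gid_attr.ndev->name);
                    dev_put(gid_attr.ndev);  /* reference taken for RoCE */
            }
            return 0;
    }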
-
David S. Miller authored
Florian Fainelli says: ==================== Enable ACB for bcm_sf2 and bcmsysport This patch series enables Broadcom's Advanced Congestion Buffering mechanism, which requires cooperation between the CPU/Management Ethernet MAC controller and the switch. I took the notifier approach because ultimately the information we need to carry to the master network device is DSA specific, and I saw little room for generalizing beyond what DSA requires. Chances are that this is highly specific to the Broadcom HW, as I don't know of any other HW out there that supports something similar for similar or identical needs. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Fainelli authored
Now that we have established the queue mapping between the switch port egress queues and the SYSTEMPORT egress queues, we can turn on Advanced Congestion Buffering (ACB) at the SYSTEMPORT level. This enables the Ethernet MAC controller to get out of band flow control information directly from the switch port and queue that it monitors such that its internal TDMA can be appropriately backpressured. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Fainelli authored
Turn on the out of band Advanced Congestion Buffering (ACB) mechanism at the switch level now that we have properly established the queue mapping between the switch egress queues and the SYSTEMPORT egress queues. This allows the switch to correctly backpressure the host system when one of its queues drops below the configured thresholds. This also helps achieve so-called "lossless" behavior by adapting the TX interrupt pacing to the actual speed and capacity of the switch port. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Fainelli authored
Establish a queue mapping between the DSA slave network device queues created that correspond to switch port queues, and the transmit queues that SYSTEMPORT manages. We need to configure the SYSTEMPORT transmit queue with the switch port number and switch port queue number in order for the switch and SYSTEMPORT hardware to utilize the out of band congestion notification. This hardware mechanism works by looking at the switch port egress queue and determining whether there are enough buffers for this queue and class of service for a successful transmission; if not, it backpressures the SYSTEMPORT queue that is being used. For this to work, we implement a notifier which looks at the DSA_PORT_REGISTER event. When DSA network devices are registered, the framework calls the DSA notifiers, extracts the number of queues for these devices and their associated port number, remembers that in the driver private structure, and linearly maps those queues to the TX rings/queues that we manage. This scheme works because DSA slave network devices always transmit through SYSTEMPORT, so when DSA slave network devices are destroyed/brought down, the corresponding SYSTEMPORT queues are no longer used. Also, by design of the DSA framework, the master network device (SYSTEMPORT) is registered first. For faster lookups we use an array of up to DSA_MAX_PORTS * number of queues per port, and map pointers to bcm_sysport_tx_ring such that our ndo_select_queue() implementation can just index into that array to locate the corresponding ring. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
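A compact sketch of the fast-path lookup this enables; the structure and names are illustrative rather than the exact bcmsysport code, and the map is assumed to have been filled from the DSA_PORT_REGISTER notifier.

    #include <linux/types.h>

    #define EX_MAX_PORTS            8   /* stands in for DSA_MAX_PORTS */
    #define EX_QUEUES_PER_PORT      8

    struct ex_queue_map {
            /* [port * EX_QUEUES_PER_PORT + queue] -> SYSTEMPORT TX ring index */
            u16 ring_index[EX_MAX_PORTS * EX_QUEUES_PER_PORT];
    };

    /* ndo_select_queue()-style lookup: switch port plus egress queue resolve
     * to the SYSTEMPORT TX ring in a single array access.
     */
    static u16 ex_select_queue(const struct ex_queue_map *map,
                               unsigned int port, unsigned int queue)
    {
            return map->ring_index[port * EX_QUEUES_PER_PORT + queue];
    }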
-
Florian Fainelli authored
We need to tell the DSA master network device doing the actual transmission what the desired switch port and queue number is for it to resolve that to the internal transmit queue it is mapped to. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Fainelli authored
In preparation for communicating a given DSA network device's port number and switch index, create a specialized DSA notifier and two events, DSA_PORT_REGISTER and DSA_PORT_UNREGISTER, that communicate: the slave network device (slave_dev), port number and switch number in the tree. This will later be used by network device drivers like bcmsysport, which need to cooperate with their DSA network devices to set up queue mapping and scheduling. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
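A sketch of the notifier side described above; the payload struct and its field names (slave_dev, port_number, switch_number) follow the wording of this commit and are illustrative, not the exact header definitions, and the DSA_PORT_REGISTER constant from the patch is assumed to be in scope.

    #include <linux/notifier.h>
    #include <linux/netdevice.h>

    struct ex_dsa_port_info {            /* illustrative payload layout */
            struct net_device *slave_dev;
            unsigned int port_number;
            unsigned int switch_number;
    };

    /* Handler shape for the port register/unregister events; a master
     * driver would record the queue/port mapping here.
     */
    static int ex_dsa_notifier(struct notifier_block *nb,
                               unsigned long event, void *ptr)
    {
            struct ex_dsa_port_info *info = ptr;

            if (event == DSA_PORT_REGISTER)
                    pr_info("slave %s: port %u on switch %u registered\n",
                            info->slave_dev->name, info->port_number,
                            info->switch_number);
            return NOTIFY_OK;
    }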
-
Timur Tabi authored
This reverts commit df1ec1b9. It turns out that memory allocated via dma_alloc_coherent is always aligned to the size of the buffer, so there's no way the RRD and RFD can ever be in separate 32-bit regions. Signed-off-by: Timur Tabi <timur@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Remove three inline helpers that are no longer needed. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Colin Ian King authored
Variable old_flags is being assigned but is never read; it is redundant and can be removed. Cleans up clang warning: Value stored to 'old_flags' is never read Signed-off-by: Colin Ian King <colin.king@canonical.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Tariq Toukan says: ==================== mlx4_en XDP TX improvements This patchset contains performance improvements to the XDP_TX use case in the mlx4 Eth driver. Patch 1 is a simple change in a function parameter type. Patch 2 replaces a call to a generic function with the relevant parts inlined. Patch 3 moves the write of descriptors' constant values from data path to control path. Series generated against net-next commit: 833e0e2f net: dst: move cpu inside ifdef to avoid compilation warning ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tariq Toukan authored
In XDP_TX, some fields in tx_info and tx_desc are constants across all entries of the different XDP_TX rings. Assign values to these fields at ring creation time, rather than in the data path. Patchset performance tests: Tested on ConnectX3Pro, Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz, single queue, no-RSS optimization ON. XDP_TX packet rate:

    Before     | After     | Gain
    13.7 Mpps  | 14.0 Mpps | 2.2%

Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
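A rough illustration of the control-path vs. data-path split (types and names are made up, not the mlx4 structures): the constant fields are written once when the XDP_TX ring is created, so the transmit path only touches the per-frame fields.

    #include <linux/types.h>

    struct ex_xdp_tx_desc {
            u32 flags;       /* constant for all XDP_TX frames */
            u16 vlan_tag;    /* constant: no VLAN insertion */
            u16 byte_count;  /* per-frame */
            u64 dma_addr;    /* per-frame */
    };

    /* Done once at ring creation, removing these writes from the hot path. */
    static void ex_xdp_ring_init(struct ex_xdp_tx_desc *desc, int entries,
                                 u32 const_flags)
    {
            int i;

            for (i = 0; i < entries; i++) {
                    desc[i].flags = const_flags;
                    desc[i].vlan_tag = 0;
            }
    }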
-
Tariq Toukan authored
Function mlx4_en_tx_write_desc() is not optimized for the XDP xmit use case. Inline the relevant parts instead. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tariq Toukan authored
The struct net_device parameter was passed only to extract struct mlx4_en_priv out of it. Here we pass the priv parameter directly. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Colin Ian King authored
The function ipgre_mpls_encap_hlen is local to the source and does not need to be in global scope, so make it static. Cleans up sparse warning: symbol 'ipgre_mpls_encap_hlen' was not declared. Should it be static? Fixes: bdc47641 ("ip_tunnel: add mpls over gre support") Signed-off-by: Colin Ian King <colin.king@canonical.com> Acked-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Colin Ian King authored
The array sctp_sched_ops is local to the source and does not need to be in global scope, so make it static. Cleans up sparse warning: symbol 'sctp_sched_ops' was not declared. Should it be static? Signed-off-by: Colin Ian King <colin.king@canonical.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Westphal authored
Similar to the previous patch, use the device lookup functions that bump the device refcount, and flag this as DOIT_UNLOCKED to avoid the rtnl mutex. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Westphal authored
Instead of relying on the rtnl mutex, bump the device reference count. After this change, values reported can change in parallel, but that's not much different from the current state, as anyone can change the settings right after rtnl_unlock (and before userspace has processed the reply). While at it, switch to GFP_KERNEL allocation. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Jiri Pirko says: ==================== net: sched: get rid of cls_flower->egress_dev Introduction of cls_flower->egress_dev was a workaround. It turned out to be a bit of an ugly hack, so replace it with more generic and reusable infrastructure. This is a dependency of the shared block introduction that will be sent as a follow-up patchset group. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
The helper and the struct field are no longer used by any code, so remove them. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
The only user of cls_flower->egress_dev is mlx5, so do the conversion there, and convert the code originating the call in the cls_flower function fl_hw_replace_filter over to the newly introduced egress device callback infrastructure. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Introduce infrastructure that allows drivers to register callbacks that are called whenever tc would offload an inserted rule and the specified device acts as the tc action egress device. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Return dev directly, or NULL if not possible. That is enough. It makes no sense to pass struct net * to the get_dev op, as there is only one possible net: the one the action was created in. So just store it in the mirred priv and use it directly. Also rename the mirred op callback function. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Subash Abhinov Kasiviswanathan says: ==================== net: qualcomm: rmnet: Rewrite some existing functionality This series fixes some of the broken rmnet functionality. Bridge mode is re-written and made usable, and the muxed_ep is converted to an hlist. Patches 1-5 are cleanups in preparation for these changes. Patch 6 does the hlist conversion. Patch 7 has the implementation of the rmnet bridge mode. v1->v2: Fix the warning and code style issue in rmnet_rx_handler as mentioned by David. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Subash Abhinov Kasiviswanathan authored
Add support to bridge two devices which can send multiplexing and aggregation (MAP) data. This is done only when the data itself is not going to be consumed in the stack but is being passed on to a different endpoint. This is mainly used for testing. Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Subash Abhinov Kasiviswanathan authored
Rather than using a static array, use a hlist to store the muxed endpoints and use the mux id to query the rmnet_device. This is useful as usually very few mux ids are used. Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> Cc: Dan Williams <dcbw@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
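A minimal sketch of the hlist-based lookup using the generic kernel hashtable helpers; the structure and function names are illustrative, not the exact rmnet ones.

    #include <linux/hashtable.h>

    #define EX_MUX_BITS 3                       /* 8 buckets is plenty */

    struct ex_endpoint {
            u8 mux_id;
            struct net_device *egress_dev;
            struct hlist_node hlnode;
    };

    static DEFINE_HASHTABLE(ex_mux_table, EX_MUX_BITS);

    /* Look up the endpoint for a mux id; only the few ids actually
     * configured occupy buckets, unlike a fixed per-id array.
     */
    static struct ex_endpoint *ex_get_endpoint(u8 mux_id)
    {
            struct ex_endpoint *ep;

            hash_for_each_possible(ex_mux_table, ep, hlnode, mux_id)
                    if (ep->mux_id == mux_id)
                            return ep;
            return NULL;
    }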
-
Subash Abhinov Kasiviswanathan authored
The rmnet_devices information is already stored in muxed_ep, so storing this in rmnet_devices[] again is redundant. Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Subash Abhinov Kasiviswanathan authored
The endpoint is set twice: in local_ep, and as the mux_id and real_dev in the rmnet private structure. Remove local_ep. While these elements are equivalent, rmnet_endpoint will now be used only as part of the rmnet_port for muxed scenarios in VND mode. Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Subash Abhinov Kasiviswanathan authored
Mode information on the real device makes it easier to route packets to the rmnet device or the bridged device based on the configuration. Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Subash Abhinov Kasiviswanathan authored
Most of these constants were used in the initial patchset where custom netlink configuration was used and hence are no longer relevant. Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Subash Abhinov Kasiviswanathan authored
This will be rewritten in the following patches. Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 11 Oct, 2017 8 commits
-
-
David S. Miller authored
Timur Tabi says: ==================== net: qcom/emac: various minor fixes A set of patches for 4.15 that clean up some code, apply minor fixes, and so on. Some of the code also prepares the driver for a future version of the EMAC controller. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Timur Tabi authored
Some of the error messages that are printed by the interrupt handlers are poorly written. For example, many don't include a device prefix, so there's no indication that they are EMAC errors. Also use rate limiting for all messages that could be printed from interrupt context. Signed-off-by: Timur Tabi <timur@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Timur Tabi authored
The EMAC has a restriction that the upper 32 bits of the base addresses for the RFD and RRD rings must be the same. To ensure that restriction is met, we allocate twice the space for the RRD and locate it at an appropriate address. We also re-arrange the allocations so that invalid addresses are even less likely. Signed-off-by: Timur Tabi <timur@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Timur Tabi authored
The EMAC is capable of multiple TX and RX rings, but the driver only supports one ring for each. One function had some left-over unused code that supports multiple rings, but all it did was make the code harder to read. Signed-off-by: Timur Tabi <timur@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Timur Tabi authored
The 64/32-bit DMA mask hackery in the EMAC driver is not actually necessary, and is technically not accurate. The EMAC hardware is limited to a 45-bit DMA address. Although no EMAC-enabled system can have that much DDR, an IOMMU could possibly provide a larger address. Rather than play games with the DMA mappings, the driver should provide a correct value and trust the DMA/IOMMU layers to do the right thing. Signed-off-by: Timur Tabi <timur@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
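The gist, in one call (a sketch, not the exact driver code): advertise the real 45-bit capability and let the DMA/IOMMU layers handle the rest.

    #include <linux/dma-mapping.h>

    static int ex_emac_set_dma_mask(struct device *dev)
    {
            /* EMAC can address 45 bits; no 64/32-bit fallback games needed */
            return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(45));
    }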
-
David S. Miller authored
Bjorn Andersson says: ==================== net: qrtr: Fixes and support receiving version 2 packets On the latest Qualcomm platforms remote processors are sending packets with version 2 of the message header. This series starts off with some fixes and then refactors the qrtr code to support receiving messages of both version 1 and version 2. As all remotes are backwards compatible, transmitted packets continue to be sent as version 1, but some groundwork has been done to make this a per-link property. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bjorn Andersson authored
Add the necessary logic for decoding incoming messages of version 2 as well. Also make sure there's room for the bigger of version 1 and 2 headers in the code allocating skbs for outgoing messages. Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bjorn Andersson authored
Rather than parsing the header of incoming messages throughout the implementation do it once when we retrieve the message and store the relevant information in the "cb" member of the sk_buff. This allows us to, in a later commit, decode version 2 messages into this same structure. Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
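A sketch of the "decode once, stash in skb->cb" pattern described above; the field set and names below are illustrative and may not match the real qrtr control-block layout.

    #include <linux/skbuff.h>

    struct ex_qrtr_cb {              /* illustrative control-block layout */
            u32 src_node;
            u32 src_port;
            u32 dst_node;
            u32 dst_port;
            u8  type;
            u8  confirm_rx;
    };

    static void ex_qrtr_fill_cb(struct sk_buff *skb, u32 src_node, u32 src_port,
                                u32 dst_node, u32 dst_port, u8 type)
    {
            struct ex_qrtr_cb *cb = (struct ex_qrtr_cb *)skb->cb;

            /* skb->cb is 48 bytes; make sure the decoded header fits */
            BUILD_BUG_ON(sizeof(*cb) > sizeof(skb->cb));

            cb->src_node = src_node;
            cb->src_port = src_port;
            cb->dst_node = dst_node;
            cb->dst_port = dst_port;
            cb->type     = type;
    }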
-