- 11 Feb, 2016 30 commits
-
-
Hariprasad Shenai authored
Rename local mm* variables to more meaningful names Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Troy Kisky says: ==================== net: fec: cleanup/fixes V2 is a rebase on top of Johannes' endian-safe patch and contains only the first eight patches. The testing for this series was done on a nitrogen6x. The base commit was commit b45efa30 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net Testing showed no change in performance. Testing used imx_v6_v7_defconfig + CONFIG_MICREL_PHY. The processor was running at 996 MHz. The following commands were used to get the transfer rates.
On an x86 Ubuntu system:
iperf -s -i.5 -u
On a nitrogen6x board, running via SD card, I first stopped some background processes:
stop cron
stop upstart-file-bridge
stop upstart-socket-bridge
stop upstart-udev-bridge
stop rsyslog
stop dbus
killall dhclient
echo performance >/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
taskset 0x2 iperf -c 192.168.0.201 -u -t60 -b500M -r
There is a branch available on github with this series, and the rest of my fec patches, for those who would like to test it. https://github.com:boundarydevices/linux-imx6.git branch net-next_master ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Troy Kisky authored
Unmap initial buffer on error. Don't free skb until it has been unmapped. Move cbd_bufaddr assignment closer to the mapping function. Signed-off-by: Troy Kisky <troy.kisky@boundarydevices.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Troy Kisky authored
If you don't own it, you shouldn't write to it. Signed-off-by: Troy Kisky <troy.kisky@boundarydevices.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Troy Kisky authored
Only the interrupt routine processes this condition. Signed-off-by: Troy Kisky <troy.kisky@boundarydevices.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Troy Kisky authored
There is no need for complex macros every time we need to activate a queue. Also, no need to call skb_get_queue_mapping when we already know which queue it is using. Signed-off-by: Troy Kisky <troy.kisky@boundarydevices.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Troy Kisky authored
This reduces code and gains speed. Signed-off-by: Troy Kisky <troy.kisky@boundarydevices.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Troy Kisky authored
When first initialized, cur_tx points to the first entry in the queue, and dirty_tx points to the last. At this point, fec_enet_get_free_txdesc_num will return tx_ring_size - 2. If tx_ring_size - 2 entries are now queued, then fec_enet_get_free_txdesc_num should return 0, but it returns tx_ring_size instead. Signed-off-by: Troy Kisky <troy.kisky@boundarydevices.com> Signed-off-by: David S. Miller <davem@davemloft.net>
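For illustration, here is a self-contained sketch of the off-by-one described above, using plain ring indices rather than the driver's descriptor-pointer arithmetic (the function and variable names are illustrative, not the driver's code):

```c
#include <assert.h>

/* Free entries between cur_tx (next to use) and dirty_tx (last cleaned). */
static unsigned int free_tx_entries(unsigned int cur_tx, unsigned int dirty_tx,
                                    unsigned int ring_size)
{
    int entries = (int)dirty_tx - (int)cur_tx - 1;

    /* The buggy form "entries > 0 ? entries : entries + ring_size" maps the
     * full-queue case (entries == 0) to ring_size instead of 0. */
    return entries >= 0 ? (unsigned int)entries : entries + ring_size;
}

int main(void)
{
    unsigned int ring_size = 8, cur_tx = 0, dirty_tx = ring_size - 1;

    /* Freshly initialized: ring_size - 2 entries are free. */
    assert(free_tx_entries(cur_tx, dirty_tx, ring_size) == ring_size - 2);

    /* Queue ring_size - 2 buffers: the queue is now full, so 0 are free. */
    cur_tx = (cur_tx + (ring_size - 2)) % ring_size;
    assert(free_tx_entries(cur_tx, dirty_tx, ring_size) == 0);
    return 0;
}
```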
-
Troy Kisky authored
On an overrun, the other flags are not valid, so don't check them. Also, don't pass bad frames up the stack. Signed-off-by: Troy Kisky <troy.kisky@boundarydevices.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Troy Kisky authored
Setting the FTRL register will stop the fec from trying to use multiple receive buffers. Signed-off-by: Troy Kisky <troy.kisky@boundarydevices.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nikolay Aleksandrov authored
No need to require the bond down while changing these settings, the change will be reflected immediately and the 3ad mode will sort itself out. For faster convergence set port->ntt to true in order to generate new LACPDUs immediately. CC: Jay Vosburgh <j.vosburgh@gmail.com> CC: Veaceslav Falico <vfalico@gmail.com> CC: Andy Gospodarek <gospo@cumulusnetworks.com> Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Johannes Berg authored
In certain 802.11 wireless deployments, there will be NA proxies that use knowledge of the network to correctly answer requests. To prevent unsolicited advertisements on the shared medium from being a problem, on such deployments wireless needs to drop them. Enable this by providing an option called "drop_unsolicited_na". Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Johannes Berg authored
In order to solve a problem with 802.11, the so-called hole-196 attack, add an option (sysctl) called "drop_unicast_in_l2_multicast" which, if enabled, causes the stack to drop IPv6 unicast packets encapsulated in link-layer multi- or broadcast frames. Such frames can (as an attack) be created by any member of the same wireless network and transmitted as valid encrypted frames since the symmetric key for broadcast frames is shared between all stations. Reviewed-by: Julian Anastasov <ja@ssi.bg> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Johannes Berg authored
In certain 802.11 wireless deployments, there will be ARP proxies that use knowledge of the network to correctly answer requests. To prevent gratuitous ARP frames on the shared medium from being a problem, on such deployments wireless needs to drop them. Enable this by providing an option called "drop_gratuitous_arp". Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Johannes Berg authored
In order to solve a problem with 802.11, the so-called hole-196 attack, add an option (sysctl) called "drop_unicast_in_l2_multicast" which, if enabled, causes the stack to drop IPv4 unicast packets encapsulated in link-layer multi- or broadcast frames. Such frames can (as an attack) be created by any member of the same wireless network and transmitted as valid encrypted frames since the symmetric key for broadcast frames is shared between all stations. Additionally, enabling this option provides compliance with a SHOULD clause of RFC 1122. Reviewed-by: Julian Anastasov <ja@ssi.bg> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
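As a usage sketch for the four knobs added in this group of patches (drop_gratuitous_arp and drop_unicast_in_l2_multicast for IPv4, drop_unsolicited_na and drop_unicast_in_l2_multicast for IPv6), the following assumes they are exposed as per-interface conf entries under /proc/sys; the interface name "wlan0" is only an example:

```c
#include <stdio.h>

/* Write a value to a /proc/sys entry; returns 0 on success, -1 on error. */
static int sysctl_write(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");

    if (!f)
        return -1;
    fputs(value, f);
    return fclose(f);
}

int main(void)
{
    /* IPv4: drop gratuitous ARP and unicast-in-L2-multicast on wlan0. */
    sysctl_write("/proc/sys/net/ipv4/conf/wlan0/drop_gratuitous_arp", "1");
    sysctl_write("/proc/sys/net/ipv4/conf/wlan0/drop_unicast_in_l2_multicast", "1");

    /* IPv6 counterparts from the same series. */
    sysctl_write("/proc/sys/net/ipv6/conf/wlan0/drop_unsolicited_na", "1");
    sysctl_write("/proc/sys/net/ipv6/conf/wlan0/drop_unicast_in_l2_multicast", "1");
    return 0;
}
```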
-
Siva Reddy Kallam authored
Signed-off-by: Siva Reddy Kallam <siva.kallam@broadcom.com> Signed-off-by: Michael Chan <mchan@broadcom.com> Acked-by: Prashant Sreedharan <prashant@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wei Tang authored
This patch fixes the following checkpatch.pl error in bpf_dbg.c: ERROR: do not initialise statics to 0 Signed-off-by: Wei Tang <tangwei@cmss.chinamobile.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David Ahern authored
Add support for filtering link dumps by master device and kind, similar to the filtering implemented for neighbor dumps. Each net_device that exists adds between 1196 bytes (eth) and 1556 bytes (bridge) to the link dump. As the number of interfaces increases so does the amount of data pushed to user space for a link list. If the user only wants to see a list of specific devices (e.g., interfaces enslaved to a specific bridge or a list of VRFs) most of that data is thrown away. Passing the filters to the kernel to have only relevant data returned makes the dump more efficient. Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Acked-by: Roopa Prabhu <roopa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
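For illustration, a minimal userspace sketch of such a filtered dump follows. It assumes the kernel honors an IFLA_MASTER attribute carried in the RTM_GETLINK dump request (the behavior this patch adds; older kernels would simply ignore the filter and return everything), and the bridge name "br0" is only an example:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <net/if.h>

int main(void)
{
    int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
    if (fd < 0)
        return 1;

    struct {
        struct nlmsghdr nh;
        struct ifinfomsg ifm;
        char attrs[64];
    } req;

    memset(&req, 0, sizeof(req));
    req.nh.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifinfomsg));
    req.nh.nlmsg_type = RTM_GETLINK;
    req.nh.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP;
    req.ifm.ifi_family = AF_UNSPEC;

    /* Filter: only interfaces enslaved to "br0" (example bridge name). */
    uint32_t master = if_nametoindex("br0");
    struct rtattr *rta = (struct rtattr *)((char *)&req + NLMSG_ALIGN(req.nh.nlmsg_len));
    rta->rta_type = IFLA_MASTER;
    rta->rta_len = RTA_LENGTH(sizeof(master));
    memcpy(RTA_DATA(rta), &master, sizeof(master));
    req.nh.nlmsg_len = NLMSG_ALIGN(req.nh.nlmsg_len) + RTA_ALIGN(rta->rta_len);

    if (send(fd, &req, req.nh.nlmsg_len, 0) < 0)
        return 1;

    char buf[16384];
    for (;;) {
        ssize_t len = recv(fd, buf, sizeof(buf), 0);
        if (len <= 0)
            break;
        for (struct nlmsghdr *nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len);
             nh = NLMSG_NEXT(nh, len)) {
            if (nh->nlmsg_type == NLMSG_DONE)
                goto out;
            if (nh->nlmsg_type == RTM_NEWLINK) {
                struct ifinfomsg *ifm = NLMSG_DATA(nh);
                printf("ifindex %d\n", ifm->ifi_index);   /* one line per match */
            }
        }
    }
out:
    close(fd);
    return 0;
}
```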
-
David S. Miller authored
Craig Gallek says: ==================== Faster SO_REUSEPORT for TCP This patch series complements an earlier series (6a5ef90c) which added faster SO_REUSEPORT lookup for UDP sockets by extending the feature to TCP sockets. It uses the same array-based data structure which allows for socket selection after finding the first listening socket that matches an incoming packet. Prior to this feature, every socket in the reuseport group needed to be found and examined before a selection could be made. With this series the SO_ATTACH_REUSEPORT_CBPF and SO_ATTACH_REUSEPORT_EBPF socket options now work for TCP sockets as well. The test at the end of the series includes an example of how to use these options to select a reuseport socket based on the cpu core id handling the incoming packet. There are several refactoring patches that precede the feature implementation. Only the last two patches in this series should result in any behavioral changes.
v4: - Fix build issue when compiling IPv6 as a module. This required moving ipv6_rcv_saddr_equal into an object that is included as a built-in object. I included this change in the second patch, which adds inet6_hash, since that is where ipv6_rcv_saddr_equal will later be called from non-module code.
v3: - Another warning in the first patch caught by a build bot. Return 0 in the no-op UDP hash function.
v2: - In the first patch I missed a couple of hash functions that should now be returning int instead of void. I missed these the first time through as it only generated a warning and not an error :\ ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Craig Gallek authored
Unfortunately the existing test relied on packet payload in order to map incoming packets to sockets. In order to get this to work with TCP, TCP_FASTOPEN needed to be used. Since the fast open path is slightly different than the standard TCP path, I created a second test which sends to reuseport group members based on receiving cpu core id. This will probably serve as a better real-world example use as well. Signed-off-by: Craig Gallek <kraig@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
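A minimal userspace sketch in the spirit of that test might look like the following. It assumes SO_ATTACH_REUSEPORT_CBPF is available (falling back to the asm-generic value 51 if the libc headers lack it) and that attaching the classic BPF program to one member applies to the whole reuseport group, as described in this series; port 8080 and the socket layout are only examples:

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <linux/filter.h>

#ifndef SO_ATTACH_REUSEPORT_CBPF
#define SO_ATTACH_REUSEPORT_CBPF 51   /* asm-generic value; assumption */
#endif

int main(void)
{
    int ncpu = sysconf(_SC_NPROCESSORS_ONLN);
    int one = 1;
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(8080),
        .sin_addr.s_addr = htonl(INADDR_ANY),
    };

    if (ncpu < 1)
        ncpu = 1;

    /* One listener per CPU, all members of the same reuseport group.
     * SO_REUSEPORT must be set before bind (and before attaching a filter). */
    for (int i = 0; i < ncpu; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0 ||
            setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)) ||
            bind(fd, (struct sockaddr *)&addr, sizeof(addr)) ||
            listen(fd, 128)) {
            perror("listener setup");
            return 1;
        }

        if (i == 0) {
            /* "A = current CPU; return A" selects listener[cpu] per packet. */
            struct sock_filter code[] = {
                { BPF_LD | BPF_W | BPF_ABS, 0, 0, SKF_AD_OFF + SKF_AD_CPU },
                { BPF_RET | BPF_A, 0, 0, 0 },
            };
            struct sock_fprog prog = {
                .len = sizeof(code) / sizeof(code[0]),
                .filter = code,
            };

            if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_REUSEPORT_CBPF,
                           &prog, sizeof(prog)))
                perror("SO_ATTACH_REUSEPORT_CBPF");
        }
    }

    pause();   /* per-listener accept() loops would go here */
    return 0;
}
```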
-
Craig Gallek authored
This change extends the fast SO_REUSEPORT socket lookup implemented for UDP to TCP. Listener sockets with SO_REUSEPORT and the same receive address are additionally added to an array for faster random access. This means that only a single socket from the group must be found in the listener list before any socket in the group can be used to receive a packet. Previously, every socket in the group needed to be considered before handing off the incoming packet. This feature also exposes the ability to use a BPF program when selecting a socket from a reuseport group. Signed-off-by: Craig Gallek <kraig@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Craig Gallek authored
Both of the lines in this patch probably should have been included in the initial implementation of this code for generic socket support, but weren't technically necessary since only UDP sockets were supported. First, the sk_reuseport_cb points to a structure which assumes each socket in the group has this pointer assigned at the same time it's added to the array in the structure. The sk_clone_lock function breaks this assumption. Since a child socket shouldn't implicitly be in a reuseport group, the simple fix is to clear the field in the clone. Second, the SO_ATTACH_REUSEPORT_xBPF socket options require that SO_REUSEPORT also be set first. For UDP sockets, this is easily enforced at bind-time since that process both puts the socket in the appropriate receive hlist and updates the reuseport structures. Since these operations can happen at two different times for TCP sockets (bind and listen) it must be explicitly checked to enforce the use of SO_REUSEPORT with SO_ATTACH_REUSEPORT_xBPF in the setsockopt call. Signed-off-by: Craig Gallek <kraig@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Craig Gallek authored
This is a preliminary step to allow fast socket lookup of SO_REUSEPORT groups. Doing so with a BPF filter will require access to the skb in question. This change plumbs the skb (and offset to payload data) through the call stack to the listening socket lookup implementations where it will be used in a following patch. Signed-off-by: Craig Gallek <kraig@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Craig Gallek authored
tcp_hdrlen is wasteful if you already have a pointer to struct tcphdr. This splits the size calculation into a helper function that can be used if a struct tcphdr is already available. Signed-off-by: Craig Gallek <kraig@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
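A minimal sketch of the kind of helper described here, with an illustrative name (not necessarily the one the patch introduces):

```c
#include <stdio.h>
#include <string.h>
#include <linux/tcp.h>

/* Header length straight from a tcphdr we already hold: doff counts 32-bit words. */
static inline unsigned int tcp_hdrlen_from_hdr(const struct tcphdr *th)
{
    return th->doff * 4;
}

int main(void)
{
    struct tcphdr th;

    memset(&th, 0, sizeof(th));
    th.doff = 5;                                   /* minimal 20-byte header */
    printf("%u\n", tcp_hdrlen_from_hdr(&th));      /* prints 20 */
    return 0;
}
```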
-
Craig Gallek authored
In order to support fast lookups for TCP sockets with SO_REUSEPORT, the function that adds sockets to the listening hash set needs to be able to check receive address equality. Since this equality check is different for IPv4 and IPv6, we will need two different socket hashing functions. This patch adds inet6_hash identical to the existing inet_hash function and updates the appropriate references. A following patch will differentiate the two by passing different comparison functions to __inet_hash. Additionally, in order to use the IPv6 address equality function from inet6_hashtables (which is compiled as a built-in object when IPv6 is enabled) it also needs to be in a built-in object file as well. This moves ipv6_rcv_saddr_equal into inet_hashtables to accomplish this. Signed-off-by: Craig Gallek <kraig@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Craig Gallek authored
In order to support fast reuseport lookups in TCP, the hash function defined in struct proto must be capable of returning an error code. This patch changes the function signature of all related hash functions to return an integer and handles or propagates this return value at all call sites. Signed-off-by: Craig Gallek <kraig@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.open-mesh.org/linux-merge
David S. Miller authored
Antonio Quartulli says: ==================== Here you have a batch of patches by Sven Eckelmann that drops our private reference counting implementation and substitutes it with the kref objects/functions. Then you have a patch, by Simon Wunderlich, that makes the broadcast protection window code more generic so that it can be re-used in the future by other components with different requirements. Lastly, Sven is also introducing two lockdep asserts in functions operating on our TVLV container list, to make sure that the proper lock is always acquired by the users. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Ajit Khaparde says: ==================== be2net Patch series Please consider applying these two patches to net-next.
Patch-1: Request RSS capability of Rx interface depending on number of Rx rings.
Patch-2: Interpret and log new data that's added to the port misconfigure async event. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ajit Khaparde authored
From FW version 11.0 onwards, the PORT_MISCONFIG event generated by the FW will carry more information about the event in the "data_word1" and "data_word2" fields. This patch adds support in the driver to parse the new information and log it accordingly. This patch also changes some of the messages that are being logged currently. Signed-off-by: Suresh Reddy <suresh.reddy@broadcom.com> Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ajit Khaparde authored
Currently we request RSS capability even if a single Rx ring is created. As a result, in a few cases we unnecessarily consume an RSS-capable interface, which is a limited resource in the chip. This patch enables RSS on an interface only if more than one Rx ring is created. Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 10 Feb, 2016 10 commits
-
-
Sven Eckelmann authored
batman-adv uses a self-written reference counting implementation which is just based on atomic_t. This is less obvious when reading the code than kref and therefore increases the chance that the reference counting will be missed. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
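All of the conversions in this batch share that rationale. As a rough userspace analogue of the pattern involved (a sketch of the idea, not the kernel's kref API), a structured get/put helper with a release callback makes the final-free path much harder to forget than open-coded atomic counting:

```c
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct my_ref {
    atomic_int refcount;
    void (*release)(struct my_ref *ref);
};

static void ref_init(struct my_ref *ref, void (*release)(struct my_ref *ref))
{
    atomic_store(&ref->refcount, 1);
    ref->release = release;
}

static void ref_get(struct my_ref *ref)
{
    atomic_fetch_add(&ref->refcount, 1);
}

static void ref_put(struct my_ref *ref)
{
    /* The release callback runs exactly once, on the final put. */
    if (atomic_fetch_sub(&ref->refcount, 1) == 1)
        ref->release(ref);
}

struct neigh_node {                 /* illustrative stand-in for a refcounted object */
    struct my_ref ref;
    int addr;
};

static void neigh_release(struct my_ref *ref)
{
    struct neigh_node *n = (struct neigh_node *)ref;   /* ref is the first member */

    printf("freeing neighbour %d\n", n->addr);
    free(n);
}

int main(void)
{
    struct neigh_node *n = malloc(sizeof(*n));

    if (!n)
        return 1;
    n->addr = 42;
    ref_init(&n->ref, neigh_release);
    ref_get(&n->ref);    /* a second user takes a reference */
    ref_put(&n->ref);    /* ... and drops it */
    ref_put(&n->ref);    /* last put triggers neigh_release() */
    return 0;
}
```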
-
Sven Eckelmann authored
batman-adv uses a self-written reference counting implementation which is just based on atomic_t. This is less obvious when reading the code than kref and therefore increases the chance that the reference counting will be missed. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Sven Eckelmann authored
batman-adv uses a self-written reference counting implementation which is just based on atomic_t. This is less obvious when reading the code than kref and therefore increases the chance that the reference counting will be missed. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Sven Eckelmann authored
batman-adv uses a self-written reference counting implementation which is just based on atomic_t. This is less obvious when reading the code than kref and therefore increases the chance that the reference counting will be missed. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Sven Eckelmann authored
batman-adv uses a self-written reference counting implementation which is just based on atomic_t. This is less obvious when reading the code than kref and therefore increases the chance that the reference counting will be missed. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Sven Eckelmann authored
batman-adv uses a self-written reference counting implementation which is just based on atomic_t. This is less obvious when reading the code than kref and therefore increases the chance that the reference counting will be missed. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Sven Eckelmann authored
batman-adv uses a self-written reference counting implementation which is just based on atomic_t. This is less obvious when reading the code than kref and therefore increases the chance that the reference counting will be missed. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Sven Eckelmann authored
batman-adv uses a self-written reference counting implementation which is just based on atomic_t. This is less obvious when reading the code than kref and therefore increases the chance that the reference counting will be missed. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Sven Eckelmann authored
batman-adv uses a self-written reference counting implementation which is just based on atomic_t. This is less obvious when reading the code than kref and therefore increases the chance that the reference counting will be missed. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Sven Eckelmann authored
batman-adv uses a self-written reference counting implementation which is just based on atomic_t. This is less obvious when reading the code than kref and therefore increases the chance that the reference counting will be missed. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-