- 03 May, 2016 31 commits
-
-
Sven Eckelmann authored
batadv_iv_ogm_orig_del_if handles two different buffers, bcast_own and bcast_own_sum, which should be resized. The error handling for allocating these two buffers is what makes the function complex. This can be avoided completely by splitting it into a main function that handles the locking, freeing and calls to the subfunctions, with each subfunction independently handling the resize of its buffer. This also makes it easy to reuse the old buffer (which is always larger) when a smaller buffer could not be allocated, without increasing the code complexity. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
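A minimal sketch of the resize-with-fallback idea described above (hypothetical helper, not the actual batman-adv code):

#include <linux/slab.h>
#include <linux/string.h>

/* Illustrative only: shrink a buffer to new_size, but keep using the old
 * (larger) allocation when the smaller one cannot be obtained.  new_size
 * must not exceed the old buffer's size. */
static void *shrink_or_reuse(void *old, size_t new_size)
{
	void *buf = kmalloc(new_size, GFP_ATOMIC);

	if (!buf)
		return old;	/* old buffer is larger, keep using it */

	memcpy(buf, old, new_size);
	kfree(old);
	return buf;
}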
-
Simon Wunderlich authored
Since batadv_v_ogm_orig_update() was only called from one place and the calling function became very short, merge these two functions together. This should also reflect the protocol description of B.A.T.M.A.N. V better. Signed-off-by: Simon Wunderlich <simon@open-mesh.com> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Simon Wunderlich authored
To match our code better to the protocol description of B.A.T.M.A.N. V, move batadv_v_ogm_forward() out into batadv_v_ogm_process_per_outif() and move all checks directly deciding whether the OGM should be forwarded into batadv_v_ogm_forward(). Signed-off-by: Simon Wunderlich <simon@open-mesh.com> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Simon Wunderlich authored
Structure initialization within the macros should follow the general coding style used in the kernel: put the initialization of the first variable and the closing brace on a separate line. Reported-by: Antonio Quartulli <a@unstable.cc> Signed-off-by: Simon Wunderlich <simon.wunderlich@open-mesh.com> [sven@narfation.org: fix conflicts with current version] Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Sven Eckelmann authored
Some really long function names in batman-adv require a newline between the return type and the function name. This has led to some lines starting with *batadv_... This * belongs to the return type and thus should be on the same line as the return type. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Sven Eckelmann authored
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Sven Eckelmann authored
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Sven Eckelmann authored
checkpatch.pl warns about the use of 'unsigned' as a short form for 'unsigned int'. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Antonio Quartulli authored
Signed-off-by: Antonio Quartulli <a@unstable.cc> [sven@narfation.org: Fix additional names] Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
-
Geliang Tang authored
Use to_delayed_work() instead of open-coding it. Signed-off-by: Geliang Tang <geliangtang@163.com> Reviewed-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
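For reference, a sketch of the pattern this conversion targets (the handler and driver structure below are hypothetical):

#include <linux/kernel.h>
#include <linux/workqueue.h>

struct my_device {			/* hypothetical driver structure */
	struct delayed_work periodic_work;
	int counter;
};

static void my_work_handler(struct work_struct *work)
{
	/* Open-coded form this replaces:
	 *   struct delayed_work *dwork =
	 *           container_of(work, struct delayed_work, work);
	 */
	struct delayed_work *dwork = to_delayed_work(work);
	struct my_device *dev = container_of(dwork, struct my_device,
					     periodic_work);

	dev->counter++;
}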
-
Geliang Tang authored
Use list_for_each_entry_safe() instead of list_for_each_safe() to simplify the code. Signed-off-by: Geliang Tang <geliangtang@163.com> Acked-by: Antonio Quartulli <a@unstable.cc> Reviewed-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
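A sketch of the simplification (the list element type here is hypothetical):

#include <linux/list.h>
#include <linux/slab.h>

struct item {				/* hypothetical list element */
	struct list_head node;
	int val;
};

static void free_all(struct list_head *head)
{
	struct item *it, *tmp;

	/* Open-coded form being replaced:
	 *
	 *   struct list_head *pos, *n;
	 *   list_for_each_safe(pos, n, head) {
	 *           it = list_entry(pos, struct item, node);
	 *           list_del(&it->node);
	 *           kfree(it);
	 *   }
	 */
	list_for_each_entry_safe(it, tmp, head, node) {
		list_del(&it->node);
		kfree(it);
	}
}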
-
Antonio Quartulli authored
Use a static string when showing table headers rather than a nonsensical parametric one with fixed arguments. It is easier to grep and it does not need to be recomputed at runtime each time. Reported-by: Joe Perches <joe@perches.com> Signed-off-by: Antonio Quartulli <a@unstable.cc> [sven@narfation.org: fix conflicts with current version] Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
-
Simon Wunderlich authored
Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Sven Eckelmann authored
The sysfs ABI documentation files and the batman-adv.txt are maintained by the BATMAN ADVANCED maintainers and patches for them should therefore be sent to them. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Sven Eckelmann authored
The mailing list of b.a.t.m.a.n@lists.open-mesh.org is moderated for non-subscribers and non-whitelisted addresses. Such mails will be delayed but the sender will not be informed about the moderation. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Antonio Quartulli <a@unstable.cc>
-
Philippe Reynes authored
The ethtool api {get|set}_settings is deprecated. We move the fec_mpc52xx driver to the new api {get|set}_link_ksettings. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
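These conversions mostly delegate to phylib helpers; purely as a sketch of the new callback's shape (driver name, link modes and values below are hypothetical, not taken from fec_mpc52xx):

#include <linux/ethtool.h>
#include <linux/netdevice.h>

static int foo_get_link_ksettings(struct net_device *dev,
				  struct ethtool_link_ksettings *cmd)
{
	/* Hypothetical fixed 100 Mbit/s full-duplex MII link. */
	ethtool_link_ksettings_zero_link_mode(cmd, supported);
	ethtool_link_ksettings_add_link_mode(cmd, supported, 100baseT_Full);

	cmd->base.speed = SPEED_100;
	cmd->base.duplex = DUPLEX_FULL;
	cmd->base.port = PORT_MII;
	cmd->base.autoneg = AUTONEG_DISABLE;

	return 0;
}

static const struct ethtool_ops foo_ethtool_ops = {
	.get_link_ksettings	= foo_get_link_ksettings,
	/* .set_link_ksettings would be wired up the same way */
};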
-
Philippe Reynes authored
The ethtool api {get|set}_settings is deprecated. We move the fs-enet driver to the new api {get|set}_link_ksettings. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Philippe Reynes authored
The ethtool api {get|set}_settings is deprecated. We move the ucc driver to the new api {get|set}_link_ksettings. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Philippe Reynes authored
The ethtool api {get|set}_settings is deprecated. We move the gianfar driver to the new api {get|set}_link_ksettings. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julia Lawall authored
The vsock_transport structure is never modified, so declare it as const. Done with the help of Coccinelle. Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julia Lawall authored
The xgene_cle_ops structure is never modified, so declare it as const. Done with the help of Coccinelle. Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Acked-by: Iyappan Subramanian <isubramanian@apm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
In the presence of inelastic flows and stress, we can call fq_codel_drop() for every packet entering the fq_codel qdisc. fq_codel_drop() is quite expensive, as it does a linear scan of 4 KB of memory to find a fat flow. Once found, it drops the oldest packet of this flow. Instead of dropping a single packet, try to drop 50% of the backlog of this fat flow, with a configurable limit of 64 packets per round. TCA_FQ_CODEL_DROP_BATCH_SIZE is the new attribute to make this limit configurable. With this strategy the 4 KB search is amortized to a single cache line per drop [1], so fq_codel_drop() no longer appears at the top of kernel profiles in the presence of a few inelastic flows. [1] Assuming a 64-byte cache line and 1024 buckets Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: Dave Taht <dave.taht@gmail.com> Cc: Jonathan Morton <chromatix99@gmail.com> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Acked-by: Dave Taht Signed-off-by: David S. Miller <davem@davemloft.net>
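A rough sketch of the batched-drop idea (simplified; not the exact sch_fq_codel.c code, and the flow structure here is hypothetical):

#include <linux/skbuff.h>

struct fat_flow {		/* hypothetical stand-in for a fq_codel flow */
	struct sk_buff_head queue;
	unsigned int backlog_bytes;
};

/* Drop up to half of the fattest flow's backlog, bounded by the configurable
 * batch size, so the cost of the linear flow scan is amortized. */
static unsigned int batch_drop(struct fat_flow *flow, unsigned int batch)
{
	unsigned int threshold = flow->backlog_bytes >> 1;
	unsigned int dropped = 0, len = 0;
	struct sk_buff *skb;

	do {
		skb = __skb_dequeue(&flow->queue);
		if (!skb)
			break;
		len += skb->len;
		kfree_skb(skb);
		dropped++;
	} while (dropped < batch && len < threshold);

	flow->backlog_bytes -= len;
	return dropped;
}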
-
Kazuya Mizuguchi authored
Aligning the reception data size is not required. Signed-off-by: Kazuya Mizuguchi <kazuya.mizuguchi.ks@renesas.com> Signed-off-by: Yoshihiro Kaneko <ykaneko0929@gmail.com> Tested-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Merge tag 'wireless-drivers-next-for-davem-2016-05-02' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next

Kalle Valo says:

====================
wireless-drivers patches for 4.7

Major changes:

brcmfmac
* add support for nl80211 BSS_SELECT feature

mwifiex
* add platform specific wakeup interrupt support

ath10k
* implement set_tsf() for 10.2.4 branch
* remove rare MSI range support
* remove deprecated firmware API 1 support

ath9k
* add module parameter to invert LED polarity

wcn36xx
* fixes to get the driver properly working on Dragonboard 410c
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Locally generated TCP GSO packets having to go through a GRE/SIT/IPIP tunnel have to go through an expensive skb_unclone(). Reallocating skb->head is a lot of work. The test should really check whether a 'real clone' of the packet was done. TCP does not care if the original gso_type is changed while the packet travels in the stack. This adds skb_header_unclone(), a variant of skb_unclone() using the skb_header_cloned() check instead of skb_cloned(). This variant can probably be used from other points. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
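A simplified sketch of such a helper (essentially skb_unclone() with the cheaper header-only test):

#include <linux/skbuff.h>

/* Reallocate skb->head only when the header area is actually shared, i.e.
 * use skb_header_cloned() instead of the stricter skb_cloned() test. */
static inline int skb_header_unclone(struct sk_buff *skb, gfp_t pri)
{
	if (skb_header_cloned(skb))
		return pskb_expand_head(skb, 0, 0, pri);

	return 0;
}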
-
Florian Westphal authored
- trans_timeout is incremented when tx queue timed out (tx watchdog).
- tx_maxrate is set via sysfs.

Moving tx_maxrate to the read-mostly part shrinks the struct by 64 bytes. While at it, also move trans_timeout (it is out-of-place in the 'write-mostly' part).

Signed-off-by: Florian Westphal <fw@strlen.de> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Nikolay Aleksandrov says:

====================
bridge: per-vlan stats

This set adds support for bridge per-vlan statistics. In order to be able to dump statistics for many vlans we need a way to continue dumping after reaching maximum size, thus patches 01 and 02 extend the new stats API with a per-device extended link stats attribute and callback which can save its local state and continue where it left off afterwards. I considered using the already existing "fill_xstats" callback but it gets confusing since we need to separate the linkinfo dump from the new stats api dump and adding a flag/argument to do that just looks messy. I don't think the rtnl_link_ops size is an issue, so adding these seemed like the cleaner approach.

Patches 03 and 04 add the stats support and netlink dump support respectively. The stats accounting is controlled via a bridge option which is default off, thus the performance impact is kept minimal. I've tested this set with both old and modified iproute2, kmemleak on and some traffic stress tests while adding/removing vlans and ports.

v3:
- drop the RCU pvid patch and remove one pointer fetch as requested
- make stats accounting optional with default to off, the option is in the same cache line as vlan_proto and vlan_enabled, so it is already fetched before the fast path check thus the performance impact is minimal, this also allows us to avoid one vlan lookup and return early when using pvid
- rebased and retested

v2:
- Improve the error checking, rename lidx to prividx and save the current idx user instead of restricting it to one in patch 01
- squash patch 02 into 01 and remove the restriction
- add callback descriptions, improve the size calculation and change the xstats message structure to have an embedding level per rtnl link type so we can avoid one call to get the link type (and thus filter on it) and also each link type can now have any number of private attributes inside
- fix a problem where the vlan stats are not dumped if the bridge has 0 vlans on it but has vlans on the ports, add bridge link type private attributes and also add paddings for future extensions to avoid at least a few netlink attributes and improve struct alignment
- drop the is_skb_forwardable argument constifying patch as it's not needed anymore, but it's a nice cleanup which I'll send separately
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nikolay Aleksandrov authored
Add a new LINK_XSTATS_TYPE_BRIDGE attribute and implement the RTM_GETSTATS callbacks for IFLA_STATS_LINK_XSTATS (fill_linkxstats and get_linkxstats_size) in order to export the per-vlan stats. The paddings were added because soon these fields will be needed for per-port per-vlan stats (or something else if someone beats me to it) so avoiding at least a few more netlink attributes. Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nikolay Aleksandrov authored
Add support for per-VLAN Tx/Rx statistics. Every global vlan context gets allocated a per-cpu stats which is then set in each per-port vlan context for quick access. The br_allowed_ingress() common function is used to account for Rx packets and the br_handle_vlan() common function is used to account for Tx packets. Stats accounting is performed only if the bridge-wide vlan_stats_enabled option is set either via sysfs or netlink. A struct hole between vlan_enabled and vlan_proto is used for the new option so it is in the same cache line. Currently it is binary (on/off) but it is intentionally restricted to exactly 0 and 1 since other values will be used in the future for different purposes (e.g. per-port stats). Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
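A sketch of the per-cpu accounting pattern described here (structure and function names are illustrative, not the bridge's actual ones):

#include <linux/percpu.h>
#include <linux/skbuff.h>
#include <linux/u64_stats_sync.h>

struct vlan_pcpu_stats {		/* illustrative per-cpu counter block */
	u64			rx_bytes;
	u64			rx_packets;
	u64			tx_bytes;
	u64			tx_packets;
	struct u64_stats_sync	syncp;
};

static void vlan_account_rx(struct vlan_pcpu_stats __percpu *pcpu,
			    const struct sk_buff *skb)
{
	struct vlan_pcpu_stats *stats = this_cpu_ptr(pcpu);

	u64_stats_update_begin(&stats->syncp);
	stats->rx_bytes += skb->len;
	stats->rx_packets++;
	u64_stats_update_end(&stats->syncp);
}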
-
Nikolay Aleksandrov authored
Add callbacks to calculate the size and fill link extended statistics which can be split into multiple messages and are dumped via the new rtnl stats API (RTM_GETSTATS) with the IFLA_STATS_LINK_XSTATS attribute. Also add that attribute to the idx mask check since it is expected to be able to save state and resume dumping (e.g. future bridge per-vlan stats will be dumped via this attribute and callbacks). Each link type should nest its private attributes under the per-link-type attribute. This allows any number of separate private attributes and avoids one call to get the dev link type. Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
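Roughly, each link type nests its private attributes like this (callback signature simplified and the inner contents are illustrative; only LINK_XSTATS_TYPE_BRIDGE is from the patch):

#include <linux/errno.h>
#include <linux/if_link.h>
#include <net/netlink.h>

static int bridge_fill_linkxstats(struct sk_buff *skb)
{
	struct nlattr *nest;

	/* Everything bridge-private lives under its own container inside
	 * IFLA_STATS_LINK_XSTATS, so other link types can do the same. */
	nest = nla_nest_start(skb, LINK_XSTATS_TYPE_BRIDGE);
	if (!nest)
		return -EMSGSIZE;

	/* per-vlan stats entries (or other private attributes) would be
	 * added here, e.g. with nla_put() calls */

	nla_nest_end(skb, nest);
	return 0;
}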
-
Nikolay Aleksandrov authored
The new prividx argument allows the currently dumping device to save a private state counter which would enable it to continue dumping from where it left off. And the idxattr is used to save the current idx user so that multiple prividx-using attributes can be requested at the same time, as suggested by Roopa Prabhu. Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 02 May, 2016 9 commits
-
-
David S. Miller authored
Tom Herbert says:

====================
net: Cleanup IPv6 ip tunnels

The IPv6 tunnel code is very different from the IPv4 code. There is a lot of redundancy with the IPv4 code, particularly in the GRE tunneling. This patch set cleans up the tunnel code to make the IPv6 code look more like the IPv4 code and use common functions between the two stacks where possible. This work should make it easier to maintain and extend the IPv6 ip tunnels.

Items in this patch set:
- Cleanup IPv6 tunnel receive path (ip6_tnl_rcv). Includes using gro_cells and exporting ip6_tnl_rcv so that ip6_gre can call it
- Move GRE functions to a common header file (tx functions) or gre_demux.c (rx functions like gre_parse_header)
- Call common GRE functions from IPv6 GRE
- Create ip6_tnl_xmit (to be like ip_tunnel_xmit)

Tested: Ran super_netperf tests for TCP_RR and TCP_STREAM for:
- IPv4 over gre, gretap, gre6, gre6tap
- IPv6 over gre, gretap, gre6, gre6tap
- ipip
- ip6ip6
- ipip/gue
- IPv6 over gre/gue
- IPv4 over gre/gue
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
Changes in GREv6 transmit path:
- Call gre_checksum, remove gre6_checksum
- Rename ip6gre_xmit2 to __gre6_xmit
- Call gre_build_header utility function
- Call ip6_tnl_xmit common function
- Call ip6_tnl_change_mtu, eliminate ip6gre_tunnel_change_mtu

Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
A few generic changes to generalize tunnels in IPv6:
- Export ip6_tnl_change_mtu so that it can be called by ip6_gre
- Add tun_hlen to ip6_tnl structure

Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
Create common functions for both IPv4 and IPv6 GRE in transmit. These are put into gre.h. Common functions are for:
- GRE checksum calculation. Move gre_checksum to gre.h.
- Building a GRE header. Move GRE build_header and rename it to gre_build_header.

Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
This patch renames ip6_tnl_xmit2 to ip6_tnl_xmit and exports it. Other users like GRE will be able to call this. The original ip6_tnl_xmit function is renamed to ip6_tnl_start_xmit (this is an ndo_start_xmit function). Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
- Create gre_rcv function. This calls gre_parse_header and ip6gre_rcv.
- Call ip6_tnl_rcv. Doing this and using gre_parse_header eliminates most of the code in ip6gre_rcv.

Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
Several of the GRE functions defined in net/ipv4/ip_gre.c are usable for the IPv6 GRE implementation (that is, they are protocol agnostic). These include:
- GRE flag handling functions are moved to gre.h
- GRE build_header is moved to gre.h and renamed gre_build_header
- parse_gre_header is moved to gre_demux.c and renamed gre_parse_header
- iptunnel_pull_header is taken out of gre_parse_header. This is now done by the caller. The header length is returned from gre_parse_header in an int* argument.

Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
Some basic changes to make the IPv6 tunnel receive path look more like the IPv4 path:
- Make ip6_tnl_rcv non-static and export it so that GREv6 and others can call it
- Make ip6_tnl_rcv look like ip_tunnel_rcv
- Switch to gro_cells_receive

Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Eric Dumazet says:

====================
net: make TCP preemptible

Most of the TCP stack assumed it was running from a BH handler. This is great for most things, as TCP behavior is very sensitive to scheduling artifacts. However, the prequeue and backlog processing are problematic, as they need to be flushed with BH being blocked.

To cope with modern needs, TCP sockets have big sk_rcvbuf values, in the order of 16 MB, and soon 32 MB. This means that the backlog can hold thousands of packets, and things like TCP coalescing or collapsing on this amount of packets can lead to insane latency spikes, since BH are blocked for too long.

It is time to make the UDP/TCP stacks preemptible. Note that the fast path still runs from a BH handler.

v2: Added "tcp: make tcp_sendmsg() aware of socket backlog" to reduce latency problems of large sends.
v3: Fixed a typo in tcp_cdg.c
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-