- 04 May, 2015 40 commits
-
Ying Xue authored
Introducing a new function makes the purpose of the tipc_subscrb_connect_cb callback routine clearer. Signed-off-by: Ying Xue <ying.xue@windriver.com> Reviewed-by: Jon Maloy <jon.maloy@ericson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ying Xue authored
When a topology server accepts a connection request from a client, it allocates a connection instance and a tipc_subscriber structure object. The former is used to communicate with the client, while the latter acts as a subscriber which manages all subscription events requested by that client. When the topology server receives a request over the connection to subscribe to name services, it creates a tipc_subscription structure instance, i.e. a subscription recording which name services are subscribed to. In order to manage all subscriptions from the same client, the topology server links them into the subscrp_list of the subscriber. So subscriber and subscription represent entirely different things, but the function names associated with them are confusing enough that it is hard to tell which functions operate on subscribers and which on subscriptions. We eliminate the confusion by renaming them. Signed-off-by: Ying Xue <ying.xue@windriver.com> Reviewed-by: Jon Maloy <jon.maloy@ericson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
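The relationship the renaming is about, as a minimal illustrative sketch (the field layout is simplified, not the exact kernel definition):

    /* Illustrative sketch only -- not the exact kernel layout.
     * One subscriber object per connected client; each subscription
     * the client requests is linked into the subscriber's subscrp_list.
     */
    struct tipc_subscriber {
            int conid;                          /* connection to the client */
            spinlock_t lock;
            struct list_head subscrp_list;      /* all subscriptions from this client */
    };

    struct tipc_subscription {
            struct tipc_subscriber *subscriber; /* owning subscriber */
            struct list_head subscrp_list;      /* link into the subscriber's list */
            /* ... subscribed name service ranges, timer, etc. ... */
    };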
-
David S. Miller authored
Linus Lüssing says:
====================
Exporting IGMP/MLD checking from bridge code

The multicast optimizations in batman-adv are so far only usable and enabled in non-bridged scenarios. To be able to support bridged setups, batman-adv needs to be able to detect IGMP/MLD queriers and reports on mesh nodes without bridges, too. See the following link for details: http://www.open-mesh.org/projects/batman-adv/wiki/Multicast-optimizations-listener-reports

To avoid duplicating code between the bridge and batman-adv, the IGMP/MLD message validation code is moved from the bridge to the IPv4/IPv6 stack. Along the way, some refactoring is done to increase readability and to iron out some subtle differences between the IGMP and MLD parsing code.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Linus Lüssing authored
With this patch, the IGMP and MLD message validation functions are moved from the bridge code to the IPv4/IPv6 multicast files. Some small refactoring was done to enhance readability and to iron out some differences in behaviour between the IGMP and MLD parsing code (e.g. the skb-cloning of MLD messages is now only done if necessary, just like the IGMP part always did). Finally, these IGMP and MLD message validation functions are exported so that not only the bridge but also, later, batman-adv can use them. Signed-off-by: Linus Lüssing <linus.luessing@c0d3.blue> Signed-off-by: David S. Miller <davem@davemloft.net>
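A hedged sketch of how a caller might use the exported IPv4 validator, assuming the ip_mc_check_igmp(skb, &skb_trimmed) signature this series introduces (handle_possible_igmp is a made-up caller):

    /* Sketch of a caller: 0 means "valid IGMP packet", and *skb_trimmed
     * may receive a clone trimmed to the IGMP payload that the caller
     * must free.
     */
    static int handle_possible_igmp(struct sk_buff *skb)
    {
            struct sk_buff *skb_trimmed = NULL;
            int err = ip_mc_check_igmp(skb, &skb_trimmed);

            if (!err) {
                    /* valid IGMP message: parse skb_trimmed ?: skb */
            }
            if (skb_trimmed && skb_trimmed != skb)
                    kfree_skb(skb_trimmed);
            return err;
    }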
-
Linus Lüssing authored
Let's use these new, neat helpers. Signed-off-by: Linus Lüssing <linus.luessing@c0d3.blue> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jamal Hadi Salim authored
Improves ingress+u32 performance from 22.4 Mpps to 22.9 Mpps. Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Acked-by: Florian Westphal <fw@strlen.de> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Francois Romieu says:
====================
via-rhine rework

The series applies against davem-next as of 9dd3c797 ("drivers: net: xgene: fix kbuild warnings").

Patches #1..#4 avoid holes in the receive ring. Patch #5 is a small leftover cleanup for #1..#4. Patches #6 and #7 are fairly simple barrier stuff. Patch #8 closes some SMP transmit races - not that anyone really complained about these, but it's a bit hard to handwave that they can be safely ignored. Some testing, especially SMP testing of course, would be welcome.

Changes since #2:
- added dma_rmb barrier in vlan related patch 6.
- s/wmb/dma_wmb/ in (*new*) patch 7 of 8.
- added explicit SMP barriers in (*new*) patch 8 of 8.

Changes since #1:
- turned wmb() into dma_wmb() as suggested by davem and Alexander Duyck in patch 1 of 6.
- forgot to reset rx_head_desc in rhine_reset_rbufs in patch 4 of 6.
- removed rx_head_desc altogether in (*new*) patch 5 of 6.
- removed some vlan receive ugliness in (*new*) patch 6 of 6.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
françois romieu authored
7ab87ff4 ("via-rhine: move work from irq handler to softirq and beyond") forgot to explicitly control the lifespan of the tx_dirty and tx_cur pointers. Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
françois romieu authored
Follow the now-usual transmit descriptor update path:
1. content change
2. dma_wmb
3. ownership change
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Signed-off-by: David S. Miller <davem@davemloft.net>
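As a minimal sketch of that pattern, with illustrative descriptor field names rather than the exact via-rhine layout:

    /* Sketch of the ordering; names are illustrative. */
    static void tx_desc_publish(struct tx_desc *desc, dma_addr_t mapping, int len)
    {
            desc->addr        = cpu_to_le32(mapping); /* 1. content change   */
            desc->desc_length = cpu_to_le32(len);
            dma_wmb();                                /* 2. content visible before... */
            desc->tx_status   = cpu_to_le32(DescOwn); /* 3. ...ownership change */
    }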
-
françois romieu authored
The NAPI receive path depends on desc->rx_status but it does not enforce any explicit receive barrier. Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Signed-off-by: David S. Miller <davem@davemloft.net>
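The matching receive-side pattern, sketched with illustrative names:

    /* Sketch: pair the device's ownership hand-back with a read barrier
     * before touching the rest of the descriptor. Names are illustrative. */
    static bool rx_desc_ready(const struct rx_desc *desc)
    {
            if (le32_to_cpu(desc->rx_status) & DescOwn)
                    return false;   /* device still owns the descriptor */
            dma_rmb();              /* read rx_status before any other field */
            return true;
    }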
-
françois romieu authored
The driver no longer produces holes in its receive ring so rx_head_desc only duplicates cur_rx. Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
françois romieu authored
Rationales:
- throttle work under memory pressure
- lower receive descriptor recycling latency for the network adapter
- lower the maintenance burden of uncommon paths

The patch is twofold:
- it fails early if the receive ring can't be completely initialized at dev->open() time
- it drops packets on the floor in the napi receive handler so as to keep the receive ring full (see the sketch below)
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Signed-off-by: David S. Miller <davem@davemloft.net>
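A sketch of the second point, with hypothetical helper names (deliver_skb_up, install_rx_buffer) standing in for the driver's actual code:

    /* Sketch: a replacement buffer is allocated first; if that fails the
     * frame is dropped and the old buffer is recycled, so the ring never
     * develops a hole. */
    new_skb = netdev_alloc_skb_ip_align(dev, buf_sz);
    if (unlikely(!new_skb)) {
            dev->stats.rx_dropped++;          /* drop on the floor */
    } else {
            deliver_skb_up(old_skb);          /* hypothetical hand-off to the stack */
            install_rx_buffer(desc, new_skb); /* hypothetical: map + set address */
    }
    desc->rx_status = cpu_to_le32(DescOwn);   /* ring slot stays usable either way */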
-
françois romieu authored
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
françois romieu authored
It's used to initialize the receive ring but it will actually shine when the receive poll code is reworked. Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
françois romieu authored
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Tom Herbert says:
====================
net: Eliminate calls to flow_dissector and introduce flow_keys_digest

In this patch set we add skb_get_hash_perturb, which gets the skbuff hash for a packet and perturbs it using a provided key and jhash1. This function is used in several qdiscs and eliminates many calls to flow_dissector and jhash3 to get a perturbed hash for a packet.

To handle the sch_choke issue (it passes flow_keys in the skbuff cb), we add flow_keys_digest, which is a digest of a flow constructed from a flow_keys structure.

This is the second version of these patches I posted a while ago, and is prerequisite work to increasing the size of the flow_keys structure and hashing over it (full IPv6 addresses, flow label, VLAN ID, etc.).

Version 2:
- Add keyval parameter to __flow_hash_from_keys which allows the caller to set the initval for jhash.
- Perturb always does flow dissection and creates a hash based on the input perturb value, which acts as the keyval to __flow_hash_from_keys.
- Added a _flow_keys_digest_data which is used in make_flow_keys_digest. This fills out the digest by populating individual fields instead of copying the whole structure.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
Call make_flow_keys_digest to get a digest from flow keys, pass the digest in the skbuff cb, and use it for comparing flows. Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
Some users of flow keys (well, just sch_choke now) need to pass flow_keys in the skbuff cb and use them for exact comparisons of flows, so that skb->hash is not sufficient. In order to increase the size of the flow_keys structure, we introduce another structure for the purpose of passing flow keys in the skbuff cb. We limit this structure to sixteen bytes, and we will technically treat it as a digest of the flow_keys struct, hence its name: flow_keys_digest. In the first incarnation we just copy the flow_keys structure up to 16 bytes -- this is the same information previously passed in the cb. In the future, we'll adapt this for larger flow_keys and could use something like SHA-1 over the whole flow_keys to improve the quality of the digest. Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
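Roughly the shape being introduced, per the description above:

    /* A fixed 16-byte digest that fits in the skbuff cb and can be
     * compared with memcmp(). */
    #define FLOW_KEYS_DIGEST_LEN    16

    struct flow_keys_digest {
            u8      data[FLOW_KEYS_DIGEST_LEN];
    };

    void make_flow_keys_digest(struct flow_keys_digest *digest,
                               const struct flow_keys *flow);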
-
Tom Herbert authored
Call skb_get_hash_perturb instead of doing skb_flow_dissect and then jhash by hand. Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
Call skb_get_hash_perturb instead of doing skb_flow_dissect and then jhash by hand. Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
Call skb_get_hash_perturb instead of doing skb_flow_dissect and then jhash by hand. Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
Call skb_get_hash_perturb instead of doing skb_flow_dissect and then jhash by hand. Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
This calls flow_dissect and __skb_get_hash to procure a hash for a packet. Input includes a key to initialize jhash. This function does not set skb->hash. Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
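A sketch of a typical qdisc conversion, assuming the helper takes a u32 perturbation value; example_sched_data and its fields are illustrative, not from any particular qdisc:

    /* One call replaces skb_flow_dissect() + jhash by hand. */
    static unsigned int example_classify(const struct sk_buff *skb,
                                         const struct example_sched_data *q)
    {
            return skb_get_hash_perturb(skb, q->perturbation) % q->divisor;
    }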
-
Andrew Lunn authored
In setups with a global scope address on one interface, and a lesser scope address on the interface sending IGMP reports, the reports can be sent using the other interface's global scope address rather than the local interface's address. RFC 2236 suggests: "Ignore the Report if you cannot identify the source address of the packet as belonging to a subnet assigned to the interface on which the packet was received." since such reports could be forged. Look at the protocol when deciding if an RT_SCOPE_LINK address should be used for the packet. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
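A sketch of the idea only, not the exact patch; proto and dev_out are illustrative locals:

    /* When routing an IGMP report, prefer a link-scope source address
     * on the outgoing device instead of any global scope address. */
    int scope = RT_SCOPE_UNIVERSE;
    __be32 src;

    if (proto == IPPROTO_IGMP)
            scope = RT_SCOPE_LINK;
    src = inet_select_addr(dev_out, 0, scope);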
-
Alexei Starovoitov authored
TC classifiers/actions were converted to RCU by John in the series: http://thread.gmane.org/gmane.linux.network/329739/focus=329739 and many follow-on patches. This is the last patch from that series that finally drops the ingress spin_lock. Single-cpu ingress+u32 performance goes from 22.9 Mpps to 24.5 Mpps. In the two-cpu case, when both cores are receiving traffic on the same device and go through the same ingress+u32, performance jumps from 4.5 + 4.5 Mpps to 23.5 + 23.5 Mpps. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Kenneth Klette Jonassen says:
====================
tcp: SACK RTTM changes for congestion control

This patch series improves SACK RTT measurements for congestion control:
o Picks the latest sequence SACKed for RTT, i.e. the most accurate delay signal.
o Calls the congestion control's pkts_acked hook with SACK RTTMs even when not sequentially ACKing new data.

V2: amend misleading comment
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kenneth Klette Jonassen authored
Invoking pkts_acked is currently conditioned on FLAG_ACKED: receiving a cumulative ACK of new data, or an ACK with the SYN flag set. Remove this condition so that CC may get RTT measurements from all SACKs. Cc: Yuchung Cheng <ycheng@google.com> Cc: Eric Dumazet <edumazet@google.com> Cc: Neal Cardwell <ncardwell@google.com> Signed-off-by: Kenneth Klette Jonassen <kennetkl@ifi.uio.no> Signed-off-by: David S. Miller <davem@davemloft.net>
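For reference, the hook in question looked roughly like this at the time of the series:

    /* A congestion control module gets the number of newly ACKed
     * packets and an RTT sample in microseconds (negative means
     * "no valid sample"). */
    struct tcp_congestion_ops {
            /* ... other ops ... */
            void (*pkts_acked)(struct sock *sk, u32 num_acked, s32 rtt_us);
            /* ... */
    };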
-
Kenneth Klette Jonassen authored
tcp_sacktag_one() always picks the earliest sequence SACKed for RTT. This might not make sense for congestion control in cases where:
1. ACKs are lost, i.e. a SACK following a lost SACK covers both new and old segments at the receiver.
2. The receiver disregards the RFC 5681 recommendation to immediately ACK out-of-order segments.
Give congestion control an RTT for the latest segment SACKed, which is the most accurate RTT estimate, but preserve the conservative RTT for RTO. Removes the call to skb_mstamp_get() in tcp_sacktag_one(). Cc: Yuchung Cheng <ycheng@google.com> Cc: Eric Dumazet <edumazet@google.com> Signed-off-by: Kenneth Klette Jonassen <kennetkl@ifi.uio.no> Acked-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kenneth Klette Jonassen authored
A later patch passes two values set in tcp_sacktag_one() to tcp_clean_rtx_queue(). Prepare for this by passing them via struct tcp_sacktag_state. Acked-by: Yuchung Cheng <ycheng@google.com> Cc: Eric Dumazet <edumazet@google.com> Signed-off-by: Kenneth Klette Jonassen <kennetkl@ifi.uio.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Thomas Graf says:
====================
rhashtable self-test improvements

This series improves the rhashtable self-test to:
* Avoid allocation of test objects
* Measure the time of test runs
* Use the iterator to walk the table for consistency
* Account for failed insertions due to memory pressure or utilization pressure
* Ignore failed insertions when checking for consistency
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
Account for failed inserts due to memory pressure or EBUSY and ignore failed entries during the consistency check. Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
As resizes may continue to run in the background, use the walker to ensure we see all entries. Also print the number of queued rehashes encountered while traversing. This may lead to warnings due to entries being seen multiple times; we consider them non-fatal. Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
By far the most expensive part of the selftest was the allocation of entries. Using a static array instead allows us to measure the rhashtable operations themselves. Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
This only blows up the size of the test structure for no gain in test coverage. Reduces size of test_obj from 24 to 16 bytes. Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
Make the test configurable by allowing all relevant knobs to be specified through module parameters. Do several test runs and measure the average time it takes to insert and remove all entries. Note that a deferred resize might still continue to run in the background. Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
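A sketch of the kind of knobs exposed; parameter names and defaults here are illustrative, not necessarily the exact module parameters:

    /* Illustrative module parameters for the self-test. */
    static int entries = 50000;
    module_param(entries, int, 0);
    MODULE_PARM_DESC(entries, "Number of entries to add (default: 50000)");

    static int runs = 4;
    module_param(runs, int, 0);
    MODULE_PARM_DESC(runs, "Number of test runs to average over (default: 4)");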
-
Thomas Graf authored
Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Alexander Duyck says:
====================
A few minor clean-ups to eth_type_trans

This series addresses a few minor issues I found in eth_type_trans that allow us to gain back something like 3 or more cycles per packet.

The first change is to drop the byte swap since it isn't necessary. On x86 we can just check the first byte and compare that against the upper 8 bits of the Ethertype to determine if we are dealing with a size value or not.

The second makes it so that the value we read in to test for multicast can be used for the address comparison. This allows us to avoid a second read of the destination address.

The final change is to avoid some unneeded instructions in computing the Ethernet header pointer. When we enter the call, the Ethernet header is at skb->data, so we just use that rather than computing mac_header and then adding that back to skb->head.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
Avoid recomputing the Ethernet header location and instead just use the pointer provided by skb->data. The problem with using eth_hdr is that the compiler wasn't smart enough to realize that skb->head + skb->mac_header was the same thing as skb->data before it added ETH_HLEN. By just caching it off before calling skb_pull_inline we can avoid a few unnecessary instructions. Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
This change makes it so that we process the address in is_multicast_ether_addr at the same size as the other calls. This allows us to avoid duplicate reads when used with other calls such as is_zero_ether_addr or eth_addr_copy. In addition I have added a 64 bit version of the function so in eth_type_trans we can process the destination address as a 64 bit value throughout. Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
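The underlying idea, sketched as an illustrative helper (not the exact kernel one): a multicast address has the LSB of its first octet set, so the test needs only that bit no matter how wide the load is.

    static inline bool example_is_multicast(const u8 *addr)
    {
            u16 a = *(const u16 *)addr;     /* same-size load as other helpers */
    #ifdef __BIG_ENDIAN
            return 0x01 & (a >> 8);         /* first wire byte is the high byte */
    #else
            return 0x01 & a;                /* first wire byte is the low byte */
    #endif
    }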
-
Alexander Duyck authored
This change takes advantage of the fact that ETH_P_802_3_MIN is aligned to 512 so as a result we can actually ignore the lower 8b when comparing the Ethertype to ETH_P_802_3_MIN. This allows us to avoid a byte swap by simply masking the value and comparing it to the byte swapped value for ETH_P_802_3_MIN. Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
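The trick, sketched as an illustrative helper rather than the exact patch: because ETH_P_802_3_MIN is 0x0600, its low byte is zero, so masking the second wire byte away lets the comparison run on the network-byte-order value directly.

    static inline bool example_proto_is_802_3(__be16 proto)
    {
            /* keep only the first wire byte on little endian;
             * harmless (a no-op in effect) on big endian */
            proto &= htons(0xFF00);
            return (u16)proto >= (u16)htons(ETH_P_802_3_MIN);
    }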
-