- 12 Jun, 2015 1 commit
-
Marcelo Ricardo Leitner authored
After db29a950 ("netfilter: conntrack: disable generic tracking for known protocols"), if the specific helper is built but not loaded (the standard situation on most distributions), systems with a restrictive firewall but a weak configuration regarding which netfilter modules to load will silently stop working. This patch adds a warning message so the sysadmin knows where to start looking. It's a pr_warn_once regardless of the protocol itself, but it should be enough to give a hint on where to look. Cc: Florian Westphal <fw@strlen.de> Cc: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
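A minimal sketch of such a one-time hint (the surrounding condition, variable and message text are illustrative, not the actual nf_conntrack patch):

    /* if the protocol-specific helper module is not loaded, fall back to
     * generic tracking but leave a single hint in the kernel log */
    pr_warn_once("nf_conntrack: default helper for protocol %u not loaded; consider loading the specific conntrack module\n",
                 protonum);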
-
- 11 Jun, 2015 39 commits
-
David S. Miller authored
Eric Dumazet says: ==================== tcp: defer shinfo->gso_size|type settings We put shinfo->gso_segs in TCP_SKB_CB(skb) a while back for performance reasons, in commit cd7d8498 ("tcp: change tcp_skb_pcount() location"). This patch series completes the job for gso_size and gso_type, so that we do not bring two extra cache lines into the tcp write xmit fast path, and makes tcp_init_tso_segs() simpler and faster. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
We had various issues in the past when the TCP stack was modifying gso_size/gso_segs while clones were in flight. Commit c52e2421 ("tcp: must unclone packets before mangling them") fixed these bugs and added a WARN_ON_ONCE(skb_cloned(skb)); in tcp_set_skb_tso_segs(). These bugs are now fixed, and because the TCP stack now only sets shinfo->gso_size|segs on the clone itself, the check can be removed. As a result of this change, the compiler inlines tcp_set_skb_tso_segs() into tcp_init_tso_segs(). Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
In commit cd7d8498 ("tcp: change tcp_skb_pcount() location") we stored gso_segs in a temporary cache-hot location. This patch does the same for gso_size. This allows us to save two cache line misses in the tcp xmit path for the last packet that is considered but not sent because of various conditions (cwnd, tso defer, receiver window, TSQ...). Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
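For reference, a hedged sketch of what caching the value in the skb control block can look like (the field name shown is illustrative, not necessarily the one used by the patch):

    /* cache gso_size next to the already-cached gso_segs so the fast path
     * does not have to dereference skb_shinfo(skb) */
    static inline int tcp_skb_mss(const struct sk_buff *skb)
    {
            /* previously: return skb_shinfo(skb)->gso_size; */
            return TCP_SKB_CB(skb)->tcp_gso_size;
    }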
-
Eric Dumazet authored
tcp_set_skb_tso_segs() & tcp_init_tso_segs() no longer use the sock pointer. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Our goal is to touch skb_shinfo(skb) only when absolutely needed, to avoid two cache line misses in the TCP output path for the last skb that is considered but not sent because of various conditions (cwnd, tso defer, receiver window, TSQ...). A packet is GSO only when skb_shinfo(skb)->gso_size is not zero. We can therefore set skb_shinfo(skb)->gso_type to sk->sk_gso_type even for non-GSO packets. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
tcp_gso_segment() and tcp_gro_receive() are not strictly part of the TCP stack. They should not assume tcp_skb_mss(skb) is in fact skb_shinfo(skb)->gso_size. This will allow us to change tcp_skb_mss() in the following patches. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Scott Feldman authored
Fix a BUG_ON() where CONFIG_NET_SWITCHDEV is set but the driver for a bridged port does not support the switchdev_port_attr_set op. Don't BUG_ON() if -EOPNOTSUPP is returned. Also change the BUG_ON() to netdev_err, since this is a normal error path and does not warrant the use of BUG_ON(), which is reserved for unrecoverable errors. Signed-off-by: Scott Feldman <sfeldma@gmail.com> Reported-by: Brenden Blanco <bblanco@plumgrid.com> Signed-off-by: David S. Miller <davem@davemloft.net>
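A hedged sketch of the resulting error handling (message wording and surrounding flow are illustrative):

    err = switchdev_port_attr_set(dev, &attr);
    if (err) {
            if (err == -EOPNOTSUPP)
                    return;         /* the driver simply doesn't implement the op */
            netdev_err(dev, "failed to set port attribute (err %d)\n", err);
    }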
-
David S. Miller authored
Ivan Vecera says: ==================== bna: clean-up The patches clean up the bna driver. v2: changes & comments requested by Joe ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ivan Vecera authored
...and remove some of them. It is not necessary to log when .probe() and .remove() are called or when a TxQ is started or stopped. The log level of some messages was also changed to a more appropriate one (link up/down, firmware loading failure). Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ivan Vecera authored
Timeout functions are defined with a 'void *' pointer argument. They should be defined directly with the 'struct bfa_ioc *' type to avoid type conversions. Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
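An illustrative before/after of the prototype change (function name is hypothetical, not taken from the driver):

    /* before: an opaque pointer that has to be cast back to the real type */
    static void bnad_ioc_timeout(void *arg)
    {
            struct bfa_ioc *ioc = (struct bfa_ioc *)arg;
            /* ... */
    }

    /* after: the prototype carries the real type, no conversion needed */
    static void bnad_ioc_timeout(struct bfa_ioc *ioc)
    {
            /* ... */
    }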
-
Ivan Vecera authored
Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ivan Vecera authored
Remove the macros for manipulating struct list_head and replace them with the standard ones. Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
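An illustration of the kind of conversion described above, using only the standard <linux/list.h> helpers (structure and function names are illustrative):

    struct bna_item {
            struct list_head qe;
    };

    static LIST_HEAD(item_q);

    static void example(struct bna_item *item)
    {
            struct bna_item *cur;

            list_add_tail(&item->qe, &item_q);      /* instead of a private enqueue macro */
            list_for_each_entry(cur, &item_q, qe)   /* instead of a private iteration macro */
                    process(cur);                   /* hypothetical per-entry work */
            list_del(&item->qe);
    }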
-
Ivan Vecera authored
The pointer cmpl used to iterate through completion entries is updated at the beginning of the while loop as well as at the end. The update at the end of the loop is useless. Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ivan Vecera authored
The patch converts a kzalloc + copy_from_user sequence to memdup_user. There is also one useless assignment of NULL to bnad->regdata, as it is immediately followed by the assignment of the kzalloc output. Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
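A hedged sketch of the conversion (buffer and length names are illustrative):

    /* before */
    buf = kzalloc(len, GFP_KERNEL);
    if (!buf)
            return -ENOMEM;
    if (copy_from_user(buf, ubuf, len)) {
            kfree(buf);
            return -EFAULT;
    }

    /* after: memdup_user allocates and copies in one step */
    buf = memdup_user(ubuf, len);
    if (IS_ERR(buf))
            return PTR_ERR(buf);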
-
Ivan Vecera authored
Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ivan Vecera authored
The TX_E_PRIO_CHANGE event is never sent for bna_tx, so it doesn't need to be handled. After this change bna_tx->flags can no longer contain the BNA_TX_F_PRIO_CHANGED flag, so it can also be eliminated. Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ivan Vecera authored
The bna_rx_config struct member paused can be removed as it is never written. Since it cannot have a non-zero value, the bna_rxf struct member flags can also never have the BNA_RXF_F_PAUSED value and is always zero. So the flags member can be removed, as well as the bna_rxf_flags enum and the code paths that depend on a non-zero bna_rxf->flags. This clean-up makes the bna_rxf_sm_paused state unused, so it can be removed as well. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ivan Vecera authored
The RXF_E_PAUSE & RXF_E_RESUME events are never sent for the bna_rxf object, so they don't need to be handled. The bna_rxf state bna_rxf_sm_fltr_clr_wait and the function bna_rxf_fltr_clear are unused after this, so remove them as well. Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ivan Vecera authored
Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ivan Vecera authored
Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ivan Vecera authored
Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ivan Vecera authored
removed: bna_rx_ucast_add, bna_rx_ucast_del; simplified: bna_enet_pause_config, bna_rx_mcast_delall, bna_rx_mcast_listset, bna_rx_mode_set, bna_rx_ucast_listset, bna_rx_ucast_set. Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ivan Vecera authored
Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ivan Vecera authored
replaced macros: BNA_MAC_IS_EQUAL -> ether_addr_equal, BNA_POWER_OF_2 -> is_power_of_2, BNA_TO_POWER_OF_2_HIGH -> roundup_pow_of_two; removed unused macros: bfa_fsm_get_state, bfa_ioc_clr_stats, bfa_ioc_fetch_stats, bfa_ioc_get_alt_ioc_fwstate, bfa_ioc_isr_mode_set, bfa_ioc_maxfrsize, bfa_ioc_mbox_cmd_pending, bfa_ioc_ownership_reset, bfa_ioc_rx_bbcredit, bfa_ioc_state_disabled, bfa_sm_cmp_state, bfa_sm_get_state, bfa_sm_send_event, bfa_sm_set_state, bfa_sm_state_decl, BFA_STRING_32, BFI_ADAPTER_IS_{PROTO,TTV,UNSUPP}, BFI_IOC_ENDIAN_SIG, BNA_{C,RX,TX}Q_PAGE_INDEX_MAX, BNA_{C,RX,TX}Q_PAGE_INDEX_MAX_SHIFT, BNA_{C,RX,TX}Q_QPGE_PTR_GET, BNA_IOC_TIMER_FREQ, BNA_MESSAGE_SIZE, BNA_QE_INDX_2_PTR, BNA_QE_INDX_RANGE, BNA_Q_GET_{C,P}I, BNA_Q_{C,P}I_ADD, BNA_Q_FREE_COUNT, BNA_Q_IN_USE_COUNT, BNA_TO_POWER_OF_2, containing_rec. Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
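For reference, a usage sketch of the generic kernel helpers that take over from the first three macros (function and variable names are illustrative):

    #include <linux/etherdevice.h>
    #include <linux/log2.h>

    static bool same_mac(const u8 *a, const u8 *b)
    {
            return ether_addr_equal(a, b);            /* was BNA_MAC_IS_EQUAL */
    }

    static u32 ring_depth(u32 requested)
    {
            if (is_power_of_2(requested))             /* was BNA_POWER_OF_2 */
                    return requested;
            return roundup_pow_of_two(requested);     /* was BNA_TO_POWER_OF_2_HIGH */
    }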
-
Ivan Vecera authored
Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ivan Vecera authored
The patch converts the mac_t type to the widely used 'u8 [ETH_ALEN]'. Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ivan Vecera authored
Parameters of all ether_addr_copy instances were checked for proper alignment. The alignment of bnad_bcast_addr is forced to 2 as the implicit alignment is 1. I have also renamed the address parameter of bnad_set_mac_address() to addr. The name mac_addr was a little bit confusing as the real parameter is a struct sockaddr *. v2: added __aligned directive to bnad_bcast_addr, renamed parameter of bnad_set_mac_address() (thx joe@perches.com) Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
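Why the alignment matters, in a hedged sketch (the copy destination shown is illustrative): ether_addr_copy() copies addresses in 16-bit or larger chunks, so both operands must be at least 2-byte aligned, while a plain static u8 array is only guaranteed byte alignment:

    static const u8 bnad_bcast_addr[ETH_ALEN] __aligned(2) =
            { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };

    /* the destination must also be 2-byte aligned */
    ether_addr_copy(netdev->dev_addr, bnad_bcast_addr);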
-
David S. Miller authored
Or Gerlitz says: ==================== mlx5 Ethernet driver update - Jun 11 2015 This series from Saeed, Achiad and Gal contains a few fixes to the recently introduced mlx5 Ethernet functionality. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Achiad Shochat authored
Allocate a transport domain and use it in the Ethernet driver code. Signed-off-by: Achiad Shochat <achiad@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Achiad Shochat authored
Each transport object, namely TIR and TIS, must have a transport domain number (TDN) identifier. The driver wrongly assumed that it is OK to use TDN=0 without explicit TDN allocation from the device. The TDN will also be used for isolating different processes once user mode Ethernet is supported. Signed-off-by: Achiad Shochat <achiad@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Saeed Mahameed authored
When NETIF_F_SG is set, each send WQE may have a different size, since each skb can have a different number of fragments, LSO header, etc. This implies that a given WQE may wrap around the send queue, i.e. begin at its end and continue at its start. While this is legal per the device spec, we preferred a solution that avoids it: when building the current WQE is done, if the next WQE may wrap around the send queue, fill the send queue with NOP WQEs until its end, so that the next WQE begins at the start of the send queue. A NOP WQE itself cannot wrap around the send queue since it is of the minimal size, 64 bytes, and all send WQEs are a multiple of that size. Signed-off-by: Achiad Shochat <achiad@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
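A hedged sketch of the wrap-avoidance logic (all names are illustrative; this is not the mlx5e data path itself):

    u16 pi = sq_pc & (ring_size - 1);          /* current producer slot */

    /* if the largest possible WQE could cross the end of the ring, pad the
     * remaining slots with single-slot (64-byte) NOP WQEs so the next real
     * WQE starts at slot 0 */
    while (pi + max_wqe_slots > ring_size) {
            post_nop_wqe();                    /* hypothetical helper, consumes one slot */
            pi = ++sq_pc & (ring_size - 1);
    }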
-
Gal Pressman authored
The Ethernet driver requires at least 3 flow table levels to operate; enforce that. Signed-off-by: Gal Pressman <galp@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Saeed Mahameed authored
We need to resolve a HW configuration issue before HW CVLAN insertion can be enabled. Meanwhile, there is no need to implement VLAN insertion in the driver; rather, use the generic kernel VLAN insertion method. Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Saeed Mahameed authored
Enable HW cacheline start padding and align the RX WQE size to the cacheline while accounting for the HW start padding. Also, fix the dma_unmap call to use the correct SKB data buffer size. Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Saeed Mahameed authored
Previously we configured the HW MTU to be netdev->mtu; actually we need to configure netdev->mtu + (ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN). Also, querying the MTU cannot fail, so make the relevant helper a void function, and add mlx5e_set_dev_port_mtu, a helper function to handle MTU setting. Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
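In other words, the HW is programmed with the full L2 frame overhead on top of the IP MTU the stack sees (constant values taken from the kernel headers; the variable name is illustrative):

    /* ETH_HLEN = 14, VLAN_HLEN = 4, ETH_FCS_LEN = 4 */
    hw_mtu = netdev->mtu + ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN;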
-
Dan Carpenter authored
We return success if mlx5e_alloc_sq_db() fails but we should return an error code. Fixes: f62b8bb8 ('net/mlx5: Extend mlx5_core to support ConnectX-4 Ethernet functionality') Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
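A hedged sketch of the fix (the arguments and unwind label are illustrative):

    err = mlx5e_alloc_sq_db(sq, numa);
    if (err)
            goto err_sq_wq_destroy;    /* propagate the error instead of returning success */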
-
Fabian Frederick authored
Use kernel.h macro definition. Thanks to Julia Lawall for Coccinelle scripting support. Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Fabian Frederick authored
Use kernel.h macro definition. Thanks to Julia Lawall for Coccinelle scripting support. Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Fabian Frederick authored
Use kernel.h macro definition. Thanks to Julia Lawall for Coccinelle scripting support. Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David S. Miller <davem@davemloft.net>
-