- 21 Oct, 2011 18 commits
-
-
Carolyn Wyborny authored
This patch moves the DMA Coalescing feature initialization code from igb_reset to a new function and replaces it with a call to the new function. Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Carolyn Wyborny authored
In 82580 and later devices, the alternate MAC address feature is completely handled by the option ROM and software does not handle it anymore. This patch changes the check_alt_mac_addr function to exit immediately if the device is 82580 or later. Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Williams, Mitch A authored
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Tested-by: Sibai Li <sibai.li@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Williams, Mitch A authored
Update adapter identification strings to properly indicate i350 VF devices in the VF driver. Change the driver ID string to remove 82576-specific wording. Update copyright date. Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Tested-by: Sibai Li <sibai.li@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Eric Dumazet authored
Adding const qualifiers to pointers can ease code review and spot some bugs. It might allow the compiler to optimize code further. For example, is it legal to temporarily write a null checksum into the tcphdr in tcp_md5_hash_header()? I am afraid a sniffer could catch the temporary null value... Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
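A minimal illustration of the concern raised above (not the kernel code): if a hashing helper needs the checksum field zeroed, it can work on a private copy of the header, so the live packet is never written and no transient null checksum is ever visible on the wire. The types and the toy hash below are made up for the sketch; the const parameter shows how the qualifier lets the compiler reject stray writes.

```c
#include <stdint.h>
#include <string.h>

struct hdr {
	uint16_t source, dest;
	uint16_t check;
};

static uint32_t toy_hash(const void *p, size_t len)
{
	const uint8_t *b = p;
	uint32_t h = 0;

	while (len--)
		h = h * 31 + *b++;
	return h;
}

/* const parameter: the compiler now rejects any write to *th */
static uint32_t hash_header(const struct hdr *th)
{
	struct hdr copy;

	memcpy(&copy, th, sizeof(copy));
	copy.check = 0;			/* zero the copy, not the packet */
	return toy_hash(&copy, sizeof(copy));
}
```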
-
Mihai Maruseac authored
Instead of using the dev->next chain and trying to resync at each call to dev_seq_start, use the name hash, keeping the bucket and the offset in the seq->private field. Tests revealed the following results for ifconfig > /dev/null:
* 1000 interfaces: 0.114s without patch, 0.089s with patch
* 3000 interfaces: 0.489s without patch, 0.110s with patch
* 5000 interfaces: 1.363s without patch, 0.250s with patch
* 128000 interfaces (other setup): ~100s without patch, ~30s with patch
Signed-off-by: Mihai Maruseac <mmaruseac@ixiacom.com> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
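A userspace-flavoured sketch of the resumable iteration described above; the hash table, struct names and NHASH value are illustrative, not the kernel's. The point is that a (bucket, offset) pair stored across calls lets the next read continue from where the previous one stopped instead of rescanning the whole device list.

```c
#include <stddef.h>

#define NHASH 256

struct dev {
	struct dev *next;		/* next device in the same name-hash bucket */
	char name[16];
};

struct seq_state {
	unsigned int bucket;		/* bucket currently being walked */
	unsigned int offset;		/* position inside that bucket */
};

static struct dev *hash[NHASH];

static struct dev *seq_next_dev(struct seq_state *st)
{
	for (; st->bucket < NHASH; st->bucket++, st->offset = 0) {
		struct dev *d = hash[st->bucket];
		unsigned int i;

		/* skip the entries already returned from this bucket */
		for (i = 0; d && i < st->offset; i++)
			d = d->next;
		if (d) {
			st->offset++;
			return d;
		}
	}
	return NULL;			/* no more devices */
}
```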
-
Eric W. Biederman authored
On systems that create and delete lots of dynamic devices, the 31-bit Linux ifindex fails to fit in the 16-bit macvtap minor, resulting in unusable macvtap devices. I have systems running automated tests that hit this condition in just a few days. Use a Linux idr allocator to track which macvtap minor numbers are available and to track the association between macvtap minor numbers and macvtap network devices. Remove the unnecessary check to see if the network device we have found is indeed a macvtap device; with macvtap-specific data structures it is impossible to find any other kind of networking device. Increase the macvtap minor range from 65536 to the full 20 bits supported by Linux device numbers. It doesn't solve the original problem, but there is no penalty for a larger minor device range. Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: David S. Miller <davem@davemloft.net>
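A sketch of the minor-number bookkeeping, written against today's idr API (the 2011 patch used the idr interface of that era); EXAMPLE_MAX_MINORS and the function names are illustrative, not the macvtap source.

```c
#include <linux/gfp.h>
#include <linux/idr.h>
#include <linux/kdev_t.h>
#include <linux/mutex.h>
#include <linux/netdevice.h>

#define EXAMPLE_MAX_MINORS (1U << MINORBITS)	/* the full 20-bit minor space */

static DEFINE_IDR(example_minor_idr);
static DEFINE_MUTEX(example_minor_lock);

/* Reserve a free minor and remember which net_device owns it. */
static int example_get_minor(struct net_device *dev)
{
	int minor;

	mutex_lock(&example_minor_lock);
	minor = idr_alloc(&example_minor_idr, dev, 1, EXAMPLE_MAX_MINORS,
			  GFP_KERNEL);
	mutex_unlock(&example_minor_lock);

	return minor;			/* >= 1 on success, -errno on failure */
}

static void example_free_minor(int minor)
{
	mutex_lock(&example_minor_lock);
	idr_remove(&example_minor_idr, minor);
	mutex_unlock(&example_minor_lock);
}
```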
-
Eric W. Biederman authored
Place macvlan_common_newlink at the end of macvtap_newlink, because failing in newlink after registering your network device is not supported. Move device_create into a netdevice creation notifier. The network device notifier is the only hook that is called after the network device has been registered with the device layer and before register_netdevice returns success. Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric W. Biederman authored
To avoid leaking packets in the receive queue, add a socket destructor that will run whenever a macvtap socket is destroyed. Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: David S. Miller <davem@davemloft.net>
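A minimal sketch of the destructor idea (the function name is illustrative, not the macvtap source): whatever is still queued on the socket is purged when the socket goes away, and the hook is installed through sk->sk_destruct at socket setup time.

```c
#include <linux/skbuff.h>
#include <net/sock.h>

static void example_sock_destruct(struct sock *sk)
{
	/* free any packets still sitting in the receive queue */
	skb_queue_purge(&sk->sk_receive_queue);
}

/* at socket setup time: sk->sk_destruct = example_sock_destruct; */
```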
-
Eric W. Biederman authored
To see if it is appropriate to enable the macvtap zero-copy feature, don't test the lowerdev network device flags. Instead, test the macvtap network device flags, which are a direct copy of the lowerdev flags. This is important because nothing holds a reference to lowerdev, and on a very bad day lowerdev could be a pointer to stale memory. Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric W. Biederman authored
There is a small window in macvtap_open, between looking up a networking device and calling macvtap_set_queue, in which macvtap_del_queues can be called from macvtap_dellink. After calling macvtap_del_queues it is totally incorrect to allow macvtap_set_queue to proceed, so prevent success by reporting that all of the available queues are in use. Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
We must account, in skb->truesize, for the size of the fragments, not just the used part of them. Doing this is important to avoid unexpected OOM situations. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> CC: Rusty Russell <rusty@rustcorp.com.au> CC: "Michael S. Tsirkin" <mst@redhat.com> CC: virtualization@lists.linux-foundation.org CC: Krishna Kumar <krkumar2@in.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
bnx2x allocates a full page per fragment. We must account, in skb->truesize, for the size of the fragment, not just the used part of it. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> CC: Eilon Greenstein <eilong@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
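A hedged sketch of the accounting rule behind this and the neighbouring truesize fixes (not the actual bnx2x hunk; the helper name is made up): when a whole page backs the fragment, truesize must grow by PAGE_SIZE, because that is what the allocation really costs.

```c
#include <linux/skbuff.h>

static void example_add_rx_frag(struct sk_buff *skb, int i,
				struct page *page, unsigned int frag_len)
{
	skb_fill_page_desc(skb, i, page, 0, frag_len);
	skb->len      += frag_len;
	skb->data_len += frag_len;
	skb->truesize += PAGE_SIZE;	/* full allocation, not just frag_len */
}
```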
-
Ian Campbell authored
I've split this bit out of the skb frag destructor patch since it helps enforce the use of the fragment API. Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ian Campbell authored
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Cc: "James E.J. Bottomley" <JBottomley@parallels.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Mike Christie <michaelc@cs.wisc.edu> Cc: James Bottomley <James.Bottomley@suse.de> Cc: Karen Xie <kxie@chelsio.com> Cc: linux-scsi@vger.kernel.org Cc: netdev@vger.kernel.org Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ian Campbell authored
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Cc: Casey Leedom <leedom@chelsio.com> Cc: netdev@vger.kernel.org Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ian Campbell authored
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Cc: Dimitris Michailidis <dm@chelsio.com> Cc: netdev@vger.kernel.org Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ian Campbell authored
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Cc: netdev@vger.kernel.org Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 20 Oct, 2011 12 commits
-
-
Maciej Żenczykowski authored
Up till now the IP{,V6}_TRANSPARENT socket options (which actually set the same bit in the socket struct) have required CAP_NET_ADMIN privileges to set or clear the option.
- We make clearing the bit not require any privileges.
- We allow CAP_NET_ADMIN to set the bit (as before this change).
- We allow CAP_NET_RAW to set this bit, because raw sockets already pretty much effectively allow you to emulate socket transparency.
Signed-off-by: Maciej Żenczykowski <maze@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
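A sketch of the resulting permission policy (not the exact kernel diff; the helper name is made up): clearing the bit needs no privilege, while setting it needs either CAP_NET_ADMIN or CAP_NET_RAW.

```c
#include <linux/capability.h>
#include <linux/errno.h>
#include <net/inet_sock.h>

static int example_set_transparent(struct sock *sk, int val)
{
	if (val && !capable(CAP_NET_ADMIN) && !capable(CAP_NET_RAW))
		return -EPERM;			/* setting is privileged */

	inet_sk(sk)->transparent = !!val;	/* clearing is always allowed */
	return 0;
}
```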
-
Eric Dumazet authored
Preliminary patch before tcp constification Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
tcp_fin() only needs socket pointer, we can remove skb and th params. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ricardo authored
This patch enables the ethtool interface. The implementation is done using the libphy helper functions. Signed-off-by: David S. Miller <davem@davemloft.net>
-
RongQing Li authored
Control these three function declarations and definitions with the same macro, CONFIG_PCI_IOV, to fix the following warnings:
drivers/net/ethernet/intel/igb/igb_main.c:165: warning: ‘igb_vf_configure’ declared ‘static’ but never defined
drivers/net/ethernet/intel/igb/igb_main.c:166: warning: ‘igb_find_enabled_vfs’ declared ‘static’ but never defined
drivers/net/ethernet/intel/igb/igb_main.c:167: warning: ‘igb_check_vf_assignment’ declared ‘static’ but never defined
Signed-off-by: RongQing Li <roy.qing.li@gmail.com> Acked-by: Greg Rose <gregory.v.rose@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
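A sketch of the fix being described (the signatures are assumed from the warning text, not copied from the driver): keep the forward declarations under the same #ifdef that guards their definitions, so !CONFIG_PCI_IOV builds no longer see declared-but-never-defined statics.

```c
#ifdef CONFIG_PCI_IOV
static int igb_vf_configure(struct igb_adapter *adapter, int vf);
static int igb_find_enabled_vfs(struct igb_adapter *adapter);
static int igb_check_vf_assignment(struct igb_adapter *adapter);
#endif
```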
-
Eric Dumazet authored
skb->truesize must account for allocated memory, not the used part of it. Doing this work is important to avoid unexpected OOM situations. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> CC: Jon Mason <mason@myri.com> Acked-by: Jon Mason <mason@myri.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
igbvf allocates half a page per skb fragment. We must account PAGE_SIZE/2 increments on skb->truesize, not the actual frag length. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> CC: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Daniel Turull reported inaccuracies in pktgen when using low packet rates, because we call ndelay(val) with values bigger than 20000. Instead of calling ndelay() for delays < 100us, we can instead loop calling ktime_now() only. Reported-by: Daniel Turull <daniel.turull@gmail.com> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
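A hedged sketch of the busy-wait idea (the helper name is illustrative): for a short remaining delay, spin reading the monotonic clock instead of handing a large value to ndelay(); the 2011 code used the ktime_now() helper of that era where this sketch uses ktime_get().

```c
#include <linux/ktime.h>

static void example_spin_until(ktime_t end)
{
	ktime_t now;

	do {
		now = ktime_get();	/* re-read the clock instead of ndelay() */
	} while (ktime_to_ns(now) < ktime_to_ns(end));
}
```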
-
Eric Dumazet authored
Since commit 356f0398 (TCP: increase default initial receive window.), we allow the sender to send 10 (TCP_DEFAULT_INIT_RCVWND) segments. Change tcp_fixup_rcvbuf() to reflect this change, even if no real change is expected, since sysctl_tcp_rmem[1] = 87380 and this value is bigger than the rcvmem computed by tcp_fixup_rcvbuf() (~23720). Note: since commit 356f0398 limited the default window to the maximum of 10*1460 and 2*MSS, we use the same heuristic in this patch. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
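A small sketch of the heuristic quoted in the note, with the constant spelled out (the function name is illustrative): ten 1460-byte segments, but never less than two actual MSS.

```c
static unsigned int example_initial_rcvwnd(unsigned int mss)
{
	unsigned int wnd = 10 * 1460;	/* TCP_DEFAULT_INIT_RCVWND segments of 1460B */

	return wnd > 2 * mss ? wnd : 2 * mss;
}
```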
-
Ian Campbell authored
A few network drivers currently use skb_frag_struct for this purpose, but I have patches which add additional fields and semantics there which these other uses do not want. A structure for referencing sub-page regions seems like a generally useful thing, so add a generic one instead of adding a network-subsystem-specific structure. Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Acked-by: Jens Axboe <jaxboe@fusionio.com> Acked-by: David Rientjes <rientjes@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
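A hypothetical shape for such a structure, purely to make the idea concrete; the struct and field names here are assumptions, not the definition the patch introduces: just enough to name a region of a page without dragging in skb-specific semantics.

```c
struct page;

struct page_region_example {
	struct page	*page;		/* the backing page */
	unsigned int	offset;		/* start of the region within the page */
	unsigned int	size;		/* length of the region in bytes */
};
```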
-
Eric Dumazet authored
skb->truesize must account for allocated memory, not the used part of it. Doing this work is important to avoid unexpected OOM situations. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> CC: Yevgeny Petrilin <yevgenyp@mellanox.co.il> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Krishna Kumar authored
Remove manual initialization in set_skb_frag, and instead use __skb_fill_page_desc() to do the same. Patch tested on net-next. Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
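A sketch of what the cleanup looks like in spirit (not the exact virtio_net hunk; the wrapper name is made up): the core helper fills in the page/offset/size triple, and the caller only bumps the frag count and the skb lengths.

```c
#include <linux/skbuff.h>

static void example_set_skb_frag(struct sk_buff *skb, struct page *page,
				 unsigned int offset, unsigned int size)
{
	int i = skb_shinfo(skb)->nr_frags;

	__skb_fill_page_desc(skb, i, page, offset, size);	/* no manual field pokes */
	skb_shinfo(skb)->nr_frags = i + 1;

	skb->data_len += size;
	skb->len      += size;
}
```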
-
- 19 Oct, 2011 10 commits
-
-
Ian Campbell authored
I audited all of the callers in the tree and only one of them (pktgen) expects it to take the page reference. Taking this reference is pretty obviously confusing and error prone. In particular I looked at the following commits which switched callers of (__)skb_frag_set_page to the skb paged fragment API:
6a930b9f cxgb3: convert to SKB paged frag API.
5dc3e196 myri10ge: convert to SKB paged frag API.
0e0634d2 vmxnet3: convert to SKB paged frag API.
86ee8130 virtionet: convert to SKB paged frag API.
4a22c4c9 sfc: convert to SKB paged frag API.
18324d69 cassini: convert to SKB paged frag API.
b061b39e benet: convert to SKB paged frag API.
b7b6a688 bnx2: convert to SKB paged frag API.
804cf14e net: xfrm: convert to SKB frag APIs
ea2ab693 net: convert core to skb paged frag APIs
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
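A sketch of the calling convention after this change, using the frag helpers of that era (the wrapper name is illustrative): a caller that wants the fragment to hold its own reference takes it explicitly rather than relying on the setter.

```c
#include <linux/mm.h>
#include <linux/skbuff.h>

static void example_attach_page(skb_frag_t *frag, struct page *page)
{
	get_page(page);				/* reference now taken by the caller... */
	__skb_frag_set_page(frag, page);	/* ...the setter itself takes none */
}
```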
-
roy.qing.li@gmail.com authored
When using dst_get_neighbour to get a neighbour, we need rcu_read_lock to protect the access, since dst_get_neighbour uses rcu_dereference. The bug was reported by Ari Savolainen <ari.m.savolainen@gmail.com>:
[ 105.612095]
[ 105.612096] ===================================================
[ 105.612100] [ INFO: suspicious rcu_dereference_check() usage. ]
[ 105.612101] ---------------------------------------------------
[ 105.612103] include/net/dst.h:91 invoked rcu_dereference_check() without protection!
[ 105.612105]
[ 105.612106] other info that might help us debug this:
[ 105.612106]
[ 105.612108]
[ 105.612108] rcu_scheduler_active = 1, debug_locks = 0
[ 105.612110] 1 lock held by dnsmasq/2618:
[ 105.612111]  #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff815df8c7>] rtnl_lock+0x17/0x20
[ 105.612120]
[ 105.612121] stack backtrace:
[ 105.612123] Pid: 2618, comm: dnsmasq Not tainted 3.1.0-rc1 #41
[ 105.612125] Call Trace:
[ 105.612129]  [<ffffffff810ccdcb>] lockdep_rcu_dereference+0xbb/0xc0
[ 105.612132]  [<ffffffff815dc5a9>] neigh_update+0x4f9/0x5f0
[ 105.612135]  [<ffffffff815da001>] ? neigh_lookup+0xe1/0x220
[ 105.612139]  [<ffffffff81639298>] arp_req_set+0xb8/0x230
[ 105.612142]  [<ffffffff8163a59f>] arp_ioctl+0x1bf/0x310
[ 105.612146]  [<ffffffff810baa40>] ? lock_hrtimer_base.isra.26+0x30/0x60
[ 105.612150]  [<ffffffff8163fb75>] inet_ioctl+0x85/0x90
[ 105.612154]  [<ffffffff815b5520>] sock_do_ioctl+0x30/0x70
[ 105.612157]  [<ffffffff815b55d3>] sock_ioctl+0x73/0x280
[ 105.612162]  [<ffffffff811b7698>] do_vfs_ioctl+0x98/0x570
[ 105.612165]  [<ffffffff811a5c40>] ? fget_light+0x340/0x3a0
[ 105.612168]  [<ffffffff811b7bbf>] sys_ioctl+0x4f/0x80
[ 105.612172]  [<ffffffff816fdcab>] system_call_fastpath+0x16/0x1b
Reported-by: Ari Savolainen <ari.m.savolainen@gmail.com> Signed-off-by: RongQing <roy.qing.li@gmail.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
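A sketch of the locking rule being fixed, using the dst_get_neighbour() helper of that era (the wrapper function is illustrative): because the helper uses rcu_dereference() internally, the lookup and any use of the result must sit inside an RCU read-side critical section.

```c
#include <linux/rcupdate.h>
#include <net/dst.h>

static bool example_has_neighbour(struct dst_entry *dst)
{
	struct neighbour *n;
	bool ret;

	rcu_read_lock();
	n = dst_get_neighbour(dst);	/* rcu_dereference() happens in here */
	ret = n != NULL;		/* only touch n while still under the lock */
	rcu_read_unlock();

	return ret;
}
```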
-
Dan Carpenter authored
This is just a cleanup. My testing version of Smatch warns about this:
net/core/filter.c +380 check_load_and_stores(6) warn: check 'flen' for negative values
flen comes from the user. We try to clamp the values here between 1 and BPF_MAXINSNS, but the clamp doesn't work because it could be negative. This is a bug, but it's not exploitable. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
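A sketch of the validation the warning is about (an illustrative helper, not the filter.c code): with a signed length, a negative value sails past an upper-bound-only check, so both ends have to be tested (or the variable made unsigned).

```c
#include <linux/errno.h>
#include <linux/filter.h>

static int example_check_flen(int flen)
{
	if (flen < 1 || flen > BPF_MAXINSNS)	/* catches negative values too */
		return -EINVAL;
	return 0;
}
```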
-
Grant Grundler authored
"ethtool -e ethX" dumps EEPROM data. Patch sets EEPROM length for device. Ethtool works alot better when the kernel believes the length is > 0. From: Allan Chou <allan@asix.com.tw> Signed-off-by: Grant Grundler <grundler@chromium.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kevin Wilson authored
This cleanup patch removes unnecessary include from net/ipv6/ip6_fib.c. Signed-off-by: Kevin Wilson <wkevils@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Gerrit Renker authored
ipv4: compat_ioctl is local to af_inet.c, make it static Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Giuseppe CAVALLARO authored
The problem with using a big MTU around 4096 bytes is that you end up allocating (4096 + NET_SKB_PAD + NET_IP_ALIGN + sizeof(struct skb_shared_info)) bytes -> 8192 bytes: order-1 pages. It's better to limit the MTU to SKB_MAX_HEAD(NET_SKB_PAD), to have no more than one page per skb. The patch also changes the netdev_alloc_skb_ip_align() call done in init_dma_desc_rings() and uses a variant allowing GFP_KERNEL allocations, letting the driver load even in case of memory pressure. Reported-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
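A sketch of the sleeping-allocation part of the change (not the stmmac hunk; the helper name is made up): at ring-setup time the driver may sleep, so the skb can be allocated with GFP_KERNEL instead of the atomic default.

```c
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static struct sk_buff *example_alloc_rx_skb(struct net_device *dev,
					     unsigned int len)
{
	struct sk_buff *skb = __netdev_alloc_skb(dev, len + NET_IP_ALIGN,
						 GFP_KERNEL);

	if (skb)
		skb_reserve(skb, NET_IP_ALIGN);	/* keep the IP header aligned */
	return skb;
}
```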
-
Giuseppe CAVALLARO authored
This patch enhances the STMMAC driver to support the CHAINED descriptor mode. STMMAC supports DMA descriptors that operate both in dual-buffer (RING) mode and linked-list (CHAINED) mode. In RING mode (the default) each descriptor points to two data buffer pointers, whereas in CHAINED mode each points to only one data buffer pointer. In CHAINED mode each descriptor will have a pointer to the next descriptor in the list, hence creating the explicit chaining in the descriptor itself, whereas such explicit chaining is not possible in RING mode. The first version of this work was done by Rayagond. The patch has then been reworked to avoid ifdefs inside the C code. A new header file has been added to define all the functions needed for managing enhanced and normal descriptors; in fact, these have to be specialized according to the ring/chain usage. Two new C files have also been added to implement the helper routines needed to manage jumbo frames, and chain and ring setup (i.e. desc3). Signed-off-by: Rayagond Kokatanur <rayagond@vayavyalabs.com> Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
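A generic sketch of what descriptor chaining means (the layout and field names are invented, not the STMMAC register format): each descriptor stores the DMA address of the next one, so the hardware follows an explicit linked list instead of stepping through a ring at a fixed stride.

```c
struct example_desc {
	unsigned int status;
	unsigned int buf_addr;		/* single data buffer pointer (chain mode) */
	unsigned int next_desc;		/* DMA address of the next descriptor */
};

static void example_init_chain(struct example_desc *desc,
			       unsigned int desc_dma_base, unsigned int count)
{
	unsigned int i;

	for (i = 0; i < count - 1; i++)
		desc[i].next_desc = desc_dma_base +
				    (i + 1) * sizeof(struct example_desc);

	desc[count - 1].next_desc = desc_dma_base;	/* close the chain */
}
```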
-
Giuseppe CAVALLARO authored
Enable the MMC support if it is actually available from the HW capability register. Signed-off-by: Rayagond Kokatanur <rayagond@vayavyalabs.com> Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Rayagond Kokatanur authored
Signed-off-by: Rayagond Kokatanur <rayagond@vayavyalabs.com> Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-