- 22 Oct, 2012 6 commits
-
-
Padmanabh Ratnakar authored
The driver assumes FW resource counts and capabilities while creating queues and using functionality like RSS. This causes driver load to fail in FW configurations where resources and capabilities are reduced. Fix this by querying the FW configuration during probe and using resources and capabilities accordingly. Signed-off-by: Padmanabh Ratnakar <padmanabh.ratnakar@emulex.com> Signed-off-by: David S. Miller <davem@davemloft.net>
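A minimal sketch of the approach described above, with entirely hypothetical names (be_query_fw_resources(), struct fw_resources and the adapter fields are illustrative, not the actual be2net API): ask the FW for its limits once during probe and clamp queue/RSS usage to the reply.

    /* Hypothetical sketch; names are illustrative, not the real be2net API. */
    struct fw_resources {
            u16 max_tx_qs;          /* TX queues the FW will allow */
            u16 max_rss_qs;         /* RSS queues the FW will allow */
            bool rss_capable;       /* whether RSS exists in this FW config */
    };

    static int probe_queue_config(struct my_adapter *adapter)
    {
            struct fw_resources res;
            int status;

            /* Ask the FW what it actually supports instead of assuming defaults. */
            status = be_query_fw_resources(adapter, &res);
            if (status)
                    return status;

            adapter->num_tx_qs = min(adapter->wanted_tx_qs, res.max_tx_qs);
            adapter->num_rss_qs = res.rss_capable ?
                                  min(adapter->wanted_rss_qs, res.max_rss_qs) : 0;
            return 0;
    }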
-
Rami Rosen authored
This patch removes double assignment of err to -EINVAL in dev_change_net_namespace(). Signed-off-by: Rami Rosen <ramirose@gmail.com> Acked-by: Serge E. Hallyn <serge.hallyn@ubuntu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pavel Emelyanov authored
The SO_BINDTODEVICE option is the only SOL_SOCKET option that can be set but cannot be read back via the sockopt API. The only way we can find the device id a socket is bound to is via the sock-diag interface, but diag works only on hashed sockets, while the option in question can be set on a not-yet-hashed one. That said, in order to know what device a socket is bound to (we do want to know this in the checkpoint-restore project) I propose to make this option getsockopt-able and report the respective device index. Another solution to the problem might be to teach sock-diag to report info on unhashed sockets. Should I go this way instead? Signed-off-by: Pavel Emelyanov <xemul@parallels.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
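With the option made readable, userspace can query the binding back. A minimal sketch assuming the index-returning behaviour proposed above (the interface name "eth0" and the exact return format are assumptions; the setsockopt side needs CAP_NET_RAW):

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_DGRAM, 0);
            int ifindex = 0;
            socklen_t len = sizeof(ifindex);

            /* Binding still works by name, as before. */
            if (setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE, "eth0", strlen("eth0")))
                    perror("setsockopt");

            /* New with this patch: read the binding back. */
            if (getsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE, &ifindex, &len) == 0)
                    printf("bound to ifindex %d\n", ifindex);
            else
                    perror("getsockopt");
            return 0;
    }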
-
Pavel Emelyanov authored
The struct sock *other one seems to be unused. Grep and make do not object. Signed-off-by: Pavel Emelyanov <xemul@parallels.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
Use the standard test for a non-zero ipv6 address. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
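The standard test referred to here is the ipv6_addr_any() helper from <net/ipv6.h>; a small sketch of the kind of cleanup this enables:

    #include <net/ipv6.h>   /* ipv6_addr_any() */

    static bool addr_is_set(const struct in6_addr *a)
    {
            /* Before such a cleanup, code typically open-codes this as a
             * memcmp() against a zeroed struct in6_addr; the standard helper
             * says the same thing more clearly. */
            return !ipv6_addr_any(a);
    }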
-
Haiyang Zhang authored
We checked with the Windows networking team: there is only one RNDIS message in each netvsc packet. Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Reviewed-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 20 Oct, 2012 1 commit
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next
David S. Miller authored
Jeff Kirsher says: ====================
This series contains updates to ixgbe and igb.

Alexander Duyck (13):
  ixgbe: Initialize q_vector cpu and affinity masks correctly
  ixgbe: Enable jumbo frames support w/ SR-IOV
  ixgbe: Move message handling routines into their own functions
  ixgbe: Add mailbox API version negotiation support to ixgbe PF
  igb: Split Rx timestamping into two separate functions
  igb: Do not use header split, instead receive all frames into a single buffer
  igb: Combine post-processing of skb into a single function
  igb: Map entire page and sync half instead of mapping and unmapping half pages
  igb: Move rx_buffer related code in Rx cleanup path into separate function
  igb: Lock buffer size at 2K even on systems with larger pages
  igb: Combine q_vector and ring allocation into a single function
  igb: Move the calls to set the Tx and Rx queues into igb_open
  igb: Split igb_update_dca into separate Tx and Rx functions

Tushar Dave (1):
  igb: Correcting and improving small packet check and padding
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 19 Oct, 2012 29 commits
-
-
David S. Miller authored
Joachim Eastwood says: ====================
This patch series prepares the old at91_ether driver for code sharing with the macb driver. The hardware is similar except for DMA TX/RX, so it is not quite clear whether it is practical to support both in one driver, but things like MDIO and statistics should be possible to share.

Patch 1 adds some register defines and bits that are only found on RM9200.
Patches 2-4 use the register defines and access functions from the macb header. These can be squashed if they cause too much churn.
Patch 5 merges the private at91_ether struct with the private macb struct. This makes it easier to later share code with macb. The private macb struct becomes quite large, but most at91_ether-specific members are removed in later patches.
Patch 8 makes macb compile when we select at91_ether. Is this approach okay?
Patch 9 makes use of the MDIO code from macb. This rips out the private phy handling code in at91_ether. One thing that is lost is the interrupt support for the phy, but this should be easy to add to macb, which would then benefit both drivers.
Patch 10 makes use of macb_set_rx_mode from macb.
Patches 11-12 make at91_ether share the rx DMA struct members from macb. These patches also move the rx buffer allocation into netdev open and deallocation into netdev close.
The last patch removes the now unused rm9200 emac header from include/mach.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joachim Eastwood authored
This file is unused after at91_ether was converted to use macb.h. Signed-off-by: Joachim Eastwood <manabian@gmail.com>
-
Joachim Eastwood authored
Signed-off-by: Joachim Eastwood <manabian@gmail.com>
-
Joachim Eastwood authored
This patch does two things:
* Use macb struct members and remove at91_ether ones
* Alloc DMA buffers on netdev start and dealloc on stop
Signed-off-by: Joachim Eastwood <manabian@gmail.com>
-
Joachim Eastwood authored
Signed-off-by: Joachim Eastwood <manabian@gmail.com>
-
Joachim Eastwood authored
Signed-off-by: Joachim Eastwood <manabian@gmail.com>
-
Joachim Eastwood authored
This rips out the at91_ether phy handling and ethtool code and replaces it with the equivalent code from macb. The only thing lost is the phy irq support from at91_ether, but this can be added to macb and then benefit all users. Signed-off-by: Joachim Eastwood <manabian@gmail.com>
-
Joachim Eastwood authored
Compile macb as well as at91_ether to access the exported functions. Signed-off-by: Joachim Eastwood <manabian@gmail.com>
-
Joachim Eastwood authored
Export some symbols to start sharing code between macb and at91_ether drivers. Signed-off-by: Joachim Eastwood <manabian@gmail.com>
-
Joachim Eastwood authored
Remove old at91_priv member and use pclk member from macb. Signed-off-by: Joachim Eastwood <manabian@gmail.com>
-
Joachim Eastwood authored
This will make it easier to share code between the drivers and eventually merge them into one driver. Signed-off-by: Joachim Eastwood <manabian@gmail.com>
-
Joachim Eastwood authored
Signed-off-by: Joachim Eastwood <manabian@gmail.com>
-
Joachim Eastwood authored
Use the macb read/write functions and remove the old at91_ether ones. Signed-off-by: Joachim Eastwood <manabian@gmail.com>
-
Joachim Eastwood authored
Use register and bits definitions from the macb header. This makes it possible to have one header file for this hardware. Process was scripted and the resulting object file has the same checksum. Signed-off-by: Joachim Eastwood <manabian@gmail.com>
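The shared macb header centralizes register offsets, bit fields and accessor macros, which is what makes a single header workable for both MACB and the RM9200 EMAC. A sketch of the pattern (illustrative of the style, not a verbatim copy of macb.h):

    /* Register offsets and bit positions are defined once... */
    #define MACB_NCR        0x0000          /* Network Control Register */
    #define MACB_NCFGR      0x0004          /* Network Configuration Register */
    #define MACB_RE_OFFSET  2               /* receive enable bit in NCR */
    #define MACB_BIT(name)  (1 << MACB_##name##_OFFSET)

    /* ...and all register access goes through common helpers keyed on the
     * private struct's mapped register base. */
    #define macb_readl(port, reg)           __raw_readl((port)->regs + MACB_##reg)
    #define macb_writel(port, reg, value)   __raw_writel((value), (port)->regs + MACB_##reg)

    /* Usage in either driver, e.g. enabling the receiver:
     * macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(RE)); */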
-
Joachim Eastwood authored
Signed-off-by: Joachim Eastwood <manabian@gmail.com>
-
Alexander Duyck authored
This change makes it so that igb_update_dca is broken into two halves, one for Rx and one for Tx. The advantage to this is primarily readability. In addition I am enabling relaxed ordering for reads from hardware since this is supported on all of the igb parts. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
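A sketch of the resulting shape, following the split described above (the ring-container field names are assumed from igb of that era; register programming details are omitted):

    static void igb_update_tx_dca(struct igb_adapter *adapter,
                                  struct igb_ring *tx_ring, int cpu)
    {
            /* Program the TX DCA control register for this ring/CPU. */
    }

    static void igb_update_rx_dca(struct igb_adapter *adapter,
                                  struct igb_ring *rx_ring, int cpu)
    {
            /* Program the RX DCA control register, including relaxed ordering
             * for reads, which all igb parts support. */
    }

    static void igb_update_dca(struct igb_q_vector *q_vector)
    {
            struct igb_adapter *adapter = q_vector->adapter;
            int cpu = get_cpu();

            if (q_vector->cpu != cpu) {
                    if (q_vector->tx.ring)
                            igb_update_tx_dca(adapter, q_vector->tx.ring, cpu);
                    if (q_vector->rx.ring)
                            igb_update_rx_dca(adapter, q_vector->rx.ring, cpu);
                    q_vector->cpu = cpu;
            }
            put_cpu();
    }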
-
Alexander Duyck authored
This change helps to address locking issues seen with netif_set_real_num_tx_queues and netif_set_real_num_rx_queues when used in the igb_set_interrupt_capability function. To resolve these locking issues I have moved the two function calls into __igb_open so that they can be called while the RTNL lock is held. An added advantage to this is that the number of queues is not updated until the last possible moment so if there are any issues in allocating MSI-X interrupts or resources for the rings we have time to change the values prior to updating the netdev. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
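The calls being relocated are netif_set_real_num_tx_queues() and netif_set_real_num_rx_queues(), which want the RTNL held; roughly (a sketch of the flow, not the exact diff):

    static int __igb_open(struct net_device *netdev, bool resuming)
    {
            struct igb_adapter *adapter = netdev_priv(netdev);
            int err;

            /* ... allocate Tx/Rx resources and request interrupts first ... */

            /* Only now, with RTNL held and the final ring counts known,
             * publish the queue counts to the stack. */
            err = netif_set_real_num_tx_queues(netdev, adapter->num_tx_queues);
            if (err)
                    goto err_set_queues;
            err = netif_set_real_num_rx_queues(netdev, adapter->num_rx_queues);
            if (err)
                    goto err_set_queues;

            /* ... enable the device and start NAPI ... */
            return 0;

    err_set_queues:
            /* free interrupts and ring resources */
            return err;
    }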
-
Alexander Duyck authored
This change combines the allocation of q_vectors and rings into a single function. The advantage of this is that we are guaranteed to avoid overlap in the L1 cache sets. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
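The single-allocation scheme amounts to embedding the rings in the q_vector via a flexible array and sizing one kzalloc() for both (a sketch of the idea; the struct contents are illustrative):

    struct igb_q_vector_sketch {
            struct napi_struct napi;
            /* ... ITR, CPU and affinity fields ... */

            /* Rings live in the same allocation as the q_vector, so a vector
             * and its rings cannot land on conflicting L1 cache sets. */
            struct igb_ring ring[0] ____cacheline_internodealigned_in_smp;
    };

    static struct igb_q_vector_sketch *alloc_q_vector(int ring_count)
    {
            size_t size = sizeof(struct igb_q_vector_sketch) +
                          ring_count * sizeof(struct igb_ring);

            /* One allocation covers the vector and all of its rings. */
            return kzalloc(size, GFP_KERNEL);
    }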
-
Alexander Duyck authored
This change locks us in at 2K buffers even on a system that supports larger frames. The reason for this change is to make better use of pages and to reduce the overall truesize of frames generated by igb. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
In order to try and isolate things a bit further I am moving the code related to retrieving data from the rx_buffer_info structure into a separate function. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
This change makes it so that we map the entire page and just sync half of it for the device at a time. The advantage to this approach is that we can avoid the locking on map/unmap seen in many IOMMU implementations. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
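The pattern is to dma_map_page() the full page once and then sync only the half currently owned by hardware with dma_sync_single_range_for_device()/_for_cpu(). A sketch (page, page_offset and IGB_RX_BUFSZ are assumed from the surrounding receive path):

    /* Map the whole page once when the buffer is first allocated. */
    dma_addr_t dma = dma_map_page(rx_ring->dev, page, 0, PAGE_SIZE,
                                  DMA_FROM_DEVICE);

    /* Before handing one half-page buffer to hardware, sync just that half... */
    dma_sync_single_range_for_device(rx_ring->dev, dma, page_offset,
                                     IGB_RX_BUFSZ, DMA_FROM_DEVICE);

    /* ...and once the descriptor completes, sync it back for the CPU.
     * The expensive map/unmap (and any IOMMU locking it implies) happens
     * only once per page rather than once per buffer. */
    dma_sync_single_range_for_cpu(rx_ring->dev, dma, page_offset,
                                  IGB_RX_BUFSZ, DMA_FROM_DEVICE);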
-
Alexander Duyck authored
This change is meant to just clean-up a number of function calls that were made at the end of the Rx clean-up path by combining them into a single function call. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
This change makes it so that we no longer use header split. The idea is to reduce partial cache line writes by hardware when handling frames larger than the header size. We can compensate for the extra overhead of having to memcpy the header buffer by avoiding the cache misses seen by leaving a full skb allocated and sitting on the ring. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
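One common shape of this approach, sketched here rather than quoted from igb: keep a preallocated skb on the ring, memcpy only the header out of the single receive buffer, and attach the rest of the buffer as a page fragment (IGB_RX_HDR_LEN, size, offset and truesize are assumed from context).

    /* rx page holds the whole frame; skb was allocated earlier and left
     * sitting on the ring, so the allocation cost is off the hot path. */
    unsigned int hlen = min_t(unsigned int, IGB_RX_HDR_LEN, size);

    /* Pull only the header into the linear part of the skb... */
    memcpy(__skb_put(skb, hlen), page_address(page) + offset, hlen);

    /* ...and hang any remaining payload off the skb as a page fragment. */
    if (size > hlen)
            skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
                            offset + hlen, size - hlen, truesize);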
-
Alexander Duyck authored
In order to support page based receive we will need to split up the two different types of timestamping into two separate functions. The first one will handle legacy timestamps with the value in the register, and the new one will handle timestamps in the Rx buffer itself. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Matthew Vick <matthew.vick@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
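After the split there are two entry points, one per timestamp source; prototypes sketched from the description (exact signatures may differ):

    /* New path: the timestamp is carried in the Rx buffer itself, in front
     * of the packet data pointed to by va. */
    void igb_ptp_rx_pktstamp(struct igb_q_vector *q_vector,
                             unsigned char *va, struct sk_buff *skb);

    /* Legacy path: the timestamp value is read from the RX timestamp
     * registers. */
    void igb_ptp_rx_rgtstamp(struct igb_q_vector *q_vector,
                             struct sk_buff *skb);

    /* The Rx clean-up code then picks one based on the descriptor status:
     * packet-carried timestamp when present, register read otherwise. */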
-
Tushar Dave authored
The current implementation messes up the tail pointer. This patch sets skb->tail correctly. Also, the small packet check and padding are optimized by using unlikely() and calling skb_pad directly. Signed-off-by: Tushar Dave <tushar.n.dave@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
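The kind of fix being described, in sketch form (MIN_TX_LEN is a placeholder for the hardware's minimum frame length, not an igb define):

    /* The hardware wants a minimum frame length; pad short packets and keep
     * skb->len and the tail pointer consistent with the padded size. */
    if (unlikely(skb->len < MIN_TX_LEN)) {
            if (skb_pad(skb, MIN_TX_LEN - skb->len))
                    return NETDEV_TX_OK;    /* skb_pad() freed the skb on failure */
            skb->len = MIN_TX_LEN;
            skb_set_tail_pointer(skb, MIN_TX_LEN);
    }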
-
Alexander Duyck authored
This change allows us to add a mailbox versioning API. This will allow us to determine, from the PF, which features the VFs support. For example, we will be implementing a version 1.1 API for the VF that indicates it supports buffer chaining, which lets us enable jumbo frames for it. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Robert Garrett <RobertX.Garrett@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
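Version negotiation boils down to a new mailbox message with which the VF proposes an API revision and the PF ACKs or NACKs it. A sketch with illustrative names (the enum values, struct vf_data_like and the message layout are assumptions, not the ixgbe definitions):

    enum mbox_api_rev {
            mbox_api_10,            /* the original API */
            mbox_api_11,            /* adds e.g. buffer chaining for jumbo frames */
            mbox_api_unknown,
    };

    /* PF side: remember what this VF negotiated and gate features on it. */
    static s32 pf_negotiate_api(struct vf_data_like *vf, u32 *msgbuf)
    {
            u32 requested = msgbuf[1];

            switch (requested) {
            case mbox_api_10:
            case mbox_api_11:
                    vf->api_version = requested;
                    return 0;       /* ACK the requested version */
            default:
                    return -1;      /* NACK: stay on the lowest common API */
            }
    }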
-
Alexander Duyck authored
Instead of trying to maintain one large monolithic function that handles most of the different messages from the VF it makes sense to break the message handling function up so that we can just go through one switch statement and call the correct routine for a given message. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
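That is, one thin dispatcher plus one routine per message type; a sketch with illustrative message IDs and helper names (not the ixgbe symbols):

    static int pf_rcv_msg_from_vf(struct adapter_like *adapter, u32 vf)
    {
            u32 msgbuf[MBX_SIZE];
            int err = read_mbx(adapter, msgbuf, MBX_SIZE, vf);

            if (err)
                    return err;

            /* One switch, one handler per message type. */
            switch (msgbuf[0] & MSG_TYPE_MASK) {
            case VF_SET_MAC_ADDR:
                    err = pf_set_vf_mac_addr(adapter, msgbuf, vf);
                    break;
            case VF_SET_MULTICAST:
                    err = pf_set_vf_multicasts(adapter, msgbuf, vf);
                    break;
            case VF_SET_VLAN:
                    err = pf_set_vf_vlan(adapter, msgbuf, vf);
                    break;
            default:
                    err = -EOPNOTSUPP;      /* unknown request: NACK it */
                    break;
            }
            return err;
    }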
-
Alexander Duyck authored
This change makes it so that we can have limited support for jumbo frames when SR-IOV is enabled. In order to accomplish this it is necessary to disable all VFs when the PF has jumbo frames enabled. If the VFs then request the same maximum frame size as the PF they will be re-enabled. A follow on patch will add a means of identifying when a VF can support spanning buffers and does not need to be worried about the actual supported max frame size. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Tested-by: Robert Garrett <robertx.e.garrett@intel.com> Tested-by: Sibai Li <Sibai.li@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
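The policy reads roughly like the following pseudocode (helper names are illustrative, not the ixgbe functions):

    /* When the PF switches to jumbo frames, VFs that have not requested a
     * matching max frame size must not receive, or they could be handed
     * frames larger than their buffers. */
    static void pf_apply_vf_frame_policy(struct adapter_like *adapter, int vf,
                                         u32 vf_requested_max_frame)
    {
            bool pf_jumbo = adapter->max_frame_size > ETH_FRAME_LEN + ETH_FCS_LEN;

            if (pf_jumbo && vf_requested_max_frame < adapter->max_frame_size)
                    disable_vf_rx(adapter, vf);     /* keep the VF quiesced */
            else
                    enable_vf_rx(adapter, vf);      /* sizes agree, re-enable */
    }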
-
Alexander Duyck authored
When enabling DCB, the rings belonging to a q_vector on CPU 0 were not reinitializing their DCA registers. Upon closer inspection, the issue was that the q_vector CPU variable was left at 0, resulting in the driver not updating the DCA registers. In order to guarantee the DCA registers will be updated, I am adding a couple-line change so that we initialize the CPU variable to -1, which will force a DCA update the first time an interrupt fires on that q_vector. In addition, we were setting the CPU affinity hint to all CPUs when we were not specifying a CPU. Instead we should leave it as all zeros to avoid any confusion about the fact that we shouldn't be giving a hint. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
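The fix itself is small; in sketch form (cpu_given is a placeholder for "a specific CPU was requested"):

    /* At q_vector allocation time: start at -1 so the first interrupt on any
     * CPU, including CPU 0, forces a DCA register update. */
    q_vector->cpu = -1;

    /* Only set an affinity hint when a specific CPU was chosen; an all-CPUs
     * mask would look like a hint where none is intended. */
    if (cpu_given)
            cpumask_set_cpu(cpu, &q_vector->affinity_mask);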
-
- 18 Oct, 2012 4 commits
-
-
git://git.open-mesh.org/linux-merge
David S. Miller authored
Included fixes:
- Fix broadcast packet CRC calculation which can lead to ~80% broadcast packet loss
- Fix a race condition in duplicate broadcast packet check
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
tcp_ioctl() tries to take into account whether the tcp socket received a FIN in order to report the correct number of bytes in the receive queue. But it's flaky, because if the application ate the last skb, we return 1 instead of 0. The correct way to detect that a FIN was received is to test SOCK_DONE. Reported-by: Elliot Hughes <enh@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Neal Cardwell <ncardwell@google.com> Cc: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
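In sketch form, the SIOCINQ path becomes the following (simplified; not the verbatim kernel code):

    /* Bytes available to read on the socket. */
    int answ = tp->rcv_nxt - tp->copied_seq;

    /* The FIN occupies one sequence number; if a FIN has been received
     * (SOCK_DONE), discount it so an emptied receive queue reports 0 rather
     * than a stray 1. */
    if (answ && sock_flag(sk, SOCK_DONE))
            answ--;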
-
Bjørn Mork authored
Analyzed a few Windows driver description files, supporting this long list of devices: %ztewwan.DeviceDesc0002% = ztewwan.ndi, USB\VID_19D2&PID_0002&MI_01 %ztewwan.DeviceDesc0012% = ztewwan.ndi, USB\VID_19D2&PID_0012&MI_01 %ztewwan.DeviceDesc0017% = ztewwan.ndi, USB\VID_19D2&PID_0017&MI_03 %ztewwan.DeviceDesc0021% = ztewwan.ndi, USB\VID_19D2&PID_0021&MI_04 %ztewwan.DeviceDesc0025% = ztewwan.ndi, USB\VID_19D2&PID_0025&MI_01 %ztewwan.DeviceDesc0031% = ztewwan.ndi, USB\VID_19D2&PID_0031&MI_04 %ztewwan.DeviceDesc0042% = ztewwan.ndi, USB\VID_19D2&PID_0042&MI_04 %ztewwan.DeviceDesc0049% = ztewwan.ndi, USB\VID_19D2&PID_0049&MI_05 %ztewwan.DeviceDesc0052% = ztewwan.ndi, USB\VID_19D2&PID_0052&MI_04 %ztewwan.DeviceDesc0055% = ztewwan.ndi, USB\VID_19D2&PID_0055&MI_01 %ztewwan.DeviceDesc0058% = ztewwan.ndi, USB\VID_19D2&PID_0058&MI_04 %ztewwan.DeviceDesc0063% = ztewwan.ndi, USB\VID_19D2&PID_0063&MI_04 %ztewwan.DeviceDesc2002% = ztewwan.ndi, USB\VID_19D2&PID_2002&MI_04 %ztewwan.DeviceDesc0104% = ztewwan.ndi, USB\VID_19D2&PID_0104&MI_04 %ztewwan.DeviceDesc0113% = ztewwan.ndi, USB\VID_19D2&PID_0113&MI_05 %ztewwan.DeviceDesc0118% = ztewwan.ndi, USB\VID_19D2&PID_0118&MI_05 %ztewwan.DeviceDesc0121% = ztewwan.ndi, USB\VID_19D2&PID_0121&MI_05 %ztewwan.DeviceDesc0123% = ztewwan.ndi, USB\VID_19D2&PID_0123&MI_04 %ztewwan.DeviceDesc0124% = ztewwan.ndi, USB\VID_19D2&PID_0124&MI_05 %ztewwan.DeviceDesc0125% = ztewwan.ndi, USB\VID_19D2&PID_0125&MI_06 %ztewwan.DeviceDesc0126% = ztewwan.ndi, USB\VID_19D2&PID_0126&MI_05 %ztewwan.DeviceDesc1008% = ztewwan.ndi, USB\VID_19D2&PID_1008&MI_04 %ztewwan.DeviceDesc1010% = ztewwan.ndi, USB\VID_19D2&PID_1010&MI_04 %ztewwan.DeviceDesc1012% = ztewwan.ndi, USB\VID_19D2&PID_1012&MI_04 %ztewwan.DeviceDesc1402% = ztewwan.ndi, USB\VID_19D2&PID_1402&MI_02 %ztewwan.DeviceDesc0157% = ztewwan.ndi, USB\VID_19D2&PID_0157&MI_05 %ztewwan.DeviceDesc0158% = ztewwan.ndi, USB\VID_19D2&PID_0158&MI_03 %ztewwan.DeviceDesc1401% = ztewwan.ndi, USB\VID_19D2&PID_1401&MI_02 %ztewwan.DeviceDesc0130% = ztewwan.ndi, USB\VID_19D2&PID_0130&MI_01 %ztewwan.DeviceDesc0133% = ztewwan.ndi, USB\VID_19D2&PID_0133&MI_03 %ztewwan.DeviceDesc0176% = ztewwan.ndi, USB\VID_19D2&PID_0176&MI_03 %ztewwan.DeviceDesc0178% = ztewwan.ndi, USB\VID_19D2&PID_0178&MI_03 %ztewwan.DeviceDesc0168% = ztewwan.ndi, USB\VID_19D2&PID_0168&MI_04 ;EuFi890 %ztewwan.DeviceDesc0191% = ztewwan.ndi, USB\VID_19D2&PID_0191&MI_04 ;AL621 %ztewwan.DeviceDesc0167% = ztewwan.ndi, USB\VID_19D2&PID_0167&MI_04 ;MF821 %ztewwan.DeviceDesc0199% = ztewwan.ndi, USB\VID_19D2&PID_0199&MI_01 %ztewwan.DeviceDesc0200% = ztewwan.ndi, USB\VID_19D2&PID_0200&MI_01 %ztewwan.DeviceDesc0257% = ztewwan.ndi, USB\VID_19D2&PID_0257&MI_03 ;MF821V %ztewwan.DeviceDesc1018% = ztewwan.ndi, USB\VID_19D2&PID_1018&MI_03 ;MF91 %ztewwan.DeviceDesc1426% = ztewwan.ndi, USB\VID_19D2&PID_1426&MI_02 ;0141 %ztewwan.DeviceDesc1247% = ztewwan.ndi, USB\VID_19D2&PID_1247&MI_04 %ztewwan.DeviceDesc1425% = ztewwan.ndi, USB\VID_19D2&PID_1425&MI_02 %ztewwan.DeviceDesc1424% = ztewwan.ndi, USB\VID_19D2&PID_1424&MI_02 %ztewwan.DeviceDesc1252% = ztewwan.ndi, USB\VID_19D2&PID_1252&MI_04 %ztewwan.DeviceDesc1254% = ztewwan.ndi, USB\VID_19D2&PID_1254&MI_04 %ztewwan.DeviceDesc1255A% = ztewwan.ndi, USB\VID_19D2&PID_1255&MI_03 %ztewwan.DeviceDesc1255B% = ztewwan.ndi, USB\VID_19D2&PID_1255&MI_04 %ztewwan.DeviceDesc1256% = ztewwan.ndi, USB\VID_19D2&PID_1256&MI_04 %ztewwan.DeviceDesc1245% = ztewwanCombB.ndi, USB\VID_19D2&PID_1245&MI_04 %ztewwan.DeviceDesc1021% = ztewwan.ndi, USB\VID_19D2&PID_1021&MI_02 Adding the ones 
we were missing. Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
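In the driver's device table each of these INF lines maps onto a vendor/product/interface-number triple; two sketched entries, assuming the qmi_wwan fixed-interface matching style in use at the time:

    /* USB\VID_19D2&PID_0002&MI_01  ->  vendor 0x19d2, product 0x0002, intf 1 */
    {QMI_FIXED_INTF(0x19d2, 0x0002, 1)},
    /* USB\VID_19D2&PID_0017&MI_03  ->  vendor 0x19d2, product 0x0017, intf 3 */
    {QMI_FIXED_INTF(0x19d2, 0x0017, 3)},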
-
Eric Dumazet authored
On some suspend/resume operations involving a wimax device, we have noticed some intermittent memory corruptions in netlink code. Stéphane Marchesin tracked this corruption to netlink_update_listeners() and suggested a patch. It appears netlink_release() should use kfree_rcu() instead of kfree() for the listeners structure, as it may be used by other cpus under RCU protection. netlink_release() must set the listeners pointer to NULL when it is about to be freed. We also have to protect netlink_update_listeners() and netlink_has_listeners() if listeners is NULL. Add a nl_deref_protected() lockdep helper to properly document which locks protect us. Reported-by: Jonathan Kliegman <kliegs@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Stéphane Marchesin <marcheu@google.com> Cc: Sam Leffler <sleffler@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
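The mechanics described are roughly as follows (a sketch; nl_deref_protected() mirrors the helper named above, and tbl stands in for the per-protocol netlink table entry):

    /* Resolve the listeners pointer while holding the netlink table lock,
     * documenting the locking requirement for lockdep. */
    #define nl_deref_protected(tbl) \
            rcu_dereference_protected((tbl)->listeners, \
                                      lockdep_is_held(&nl_table_lock))

    /* In netlink_release(), when the listeners block is about to go away:
     * clear the pointer first, then free via RCU so CPUs still walking it
     * under rcu_read_lock() never touch freed memory. */
    struct listeners *old = nl_deref_protected(tbl);

    RCU_INIT_POINTER(tbl->listeners, NULL);
    kfree_rcu(old, rcu);

    /* Readers such as netlink_has_listeners() and netlink_update_listeners()
     * must now tolerate a NULL listeners pointer. */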
-