- 02 Nov, 2013 30 commits
-
-
Bjørn Mork authored
Convert the constants used in these comparisons at build time instead of converting the variables for every received frame at run time. Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
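A minimal sketch of the idea, assuming the standard CDC NCM header definitions (an illustration, not the patch itself): converting the constant with cpu_to_le32() lets the compiler fold the byte swap at build time, whereas le32_to_cpu() on the received field would run for every frame.

#include <linux/kernel.h>
#include <linux/usb/cdc.h>      /* struct usb_cdc_ncm_nth16, USB_CDC_NCM_NTH16_SIGN */

/* Sketch: compare the little-endian wire field against a constant that the
 * compiler byte-swaps once at compile time, instead of swapping the variable. */
static bool nth16_signature_ok(const struct usb_cdc_ncm_nth16 *nth16)
{
        return nth16->dwSignature == cpu_to_le32(USB_CDC_NCM_NTH16_SIGN);
}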
-
Bjørn Mork authored
These signatures are well-known bit patterns, mostly made up of ASCII characters. They are much easier to parse mentally when printed in hex. Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bjørn Mork authored
Take advantage of standard device name prefixing and netdevice msglvl control where possible. Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
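For illustration only (the helper name and message text are made up), netif_dbg() both prefixes the interface name and honors the per-netdevice msglvl set via ethtool:

#include <linux/netdevice.h>
#include <linux/usb/usbnet.h>

/* Hypothetical helper: netif_dbg() checks dev->msg_enable and prefixes the
 * message with the netdev name, replacing hand-rolled "%s: ..." prefixes. */
static void note_bad_ntb(struct usbnet *dev)
{
        netif_dbg(dev, rx_err, dev->net, "dropping invalid NTB\n");
}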
-
Bjørn Mork authored
Fix cut'n'paste typo. Log the bogus length and not the irrelevant signature. Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bjørn Mork authored
usbnet uses the hard_mtu value for sizing the tx queue and nothing else. We will be transmitting buffers of up to tx_max size, so that's the proper value to give usbnet. The individual datagram size is completely irrelevant here. Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
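In usbnet terms this boils down to something like the following sketch (field names as in struct usbnet and struct cdc_ncm_ctx, used here illustratively):

#include <linux/usb/usbnet.h>
#include <linux/usb/cdc_ncm.h>

/* Sketch: usbnet uses hard_mtu only to size the tx queue, so report the
 * aggregate NTB size we will actually transmit, not the datagram size. */
static void size_tx_queue(struct usbnet *dev, const struct cdc_ncm_ctx *ctx)
{
        dev->hard_mtu = ctx->tx_max;
}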
-
Bjørn Mork authored
No need to keep this code duplicated from usbnet. Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bjørn Mork authored
These functions were merely wrappers around the usbnet variants. Remove them. Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bjørn Mork authored
Padding NTBs to max size is part of the support for devices optimizing their DMA transfers. This optimization depends on max sized NTBs not being ZLP terminated. So we are much better off dropping the padding if we are going to send a ZLP anyway. Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bjørn Mork authored
The probed interface must be the master/control interface of the function. Make this explicit and simplify redundant tests. Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bjørn Mork authored
header_desc was completely unused and union_desc was never used outside cdc_ncm_bind_common. Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bjørn Mork authored
We need to inform the device about the *new* value, not the old one. Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bjørn Mork authored
Moving the call to cdc_ncm_setup() after the endpoint setup removes the last remaining reference to ncm_parm outside cdc_ncm_setup. Collecting all the ncm_parm based calculations in cdc_ncm_setup improves readability. Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bjørn Mork authored
These fields are only used to prevent printing the same speeds multiple times if we receive multiple identical speed notifications. The value of these printks is questionable, and even more so when we filter out some of the notifications sent to us by the firmware. If we are going to print any of these, then we should print them all. Removing little-used fields is a bonus. Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bjørn Mork authored
We already use the usbnet udev field everywhere this could have been used. Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bjørn Mork authored
Too many pointers back and forth are likely to confuse developers, creating subtle bugs whenever we forget to synchronize them all. As a usbnet driver, we should stick with the standard struct usbnet fields as much as possible. The netdevice is one such field. Cc: Greg Suarez <gsuarez@smithmicro.com> Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bjørn Mork authored
No need to duplicate stuff already in the common usbnet struct. We still need to keep our special find_endpoints function because we need explicit control over the selected altsetting. Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bjørn Mork authored
This is always a duplicate of the "control" field. It causes confusion wrt intf_data updates and cleanups. Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bjørn Mork authored
This makes it a lot easier to test modified versions. Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bjørn Mork authored
We can avoid the costly division for the common case where we pad the frame to tx_max size as long as we ensure that tx_max is either the device-specified dwNtbOutMaxSize or not a multiple of wMaxPacketSize. Using the preconverted 'maxpacket' field avoids converting wMaxPacketSize to CPU endianness for every transmitted frame. And since we will only hit the one-byte padding rule for short frames, we can drop testing the skb for tailroom. The change means that tx_max now represents the real maximum skb size, enabling us to allocate the correct size instead of always making room for one extra byte. Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
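Roughly, the transmit-side decision described above looks like this sketch (field names borrowed from struct usbnet and struct cdc_ncm_ctx; not the literal patch):

#include <linux/skbuff.h>
#include <linux/usb/usbnet.h>
#include <linux/usb/cdc_ncm.h>

/* Sketch of the padding rules for a finished NTB in skb_out:
 * - devices not needing ZLPs get the NTB padded to the full tx_max so they
 *   can optimize their DMA transfers;
 * - otherwise a single zero byte is added only when the length happens to
 *   be an exact multiple of wMaxPacketSize, avoiding an unintended ZLP. */
static void pad_ntb_sketch(struct usbnet *dev, struct cdc_ncm_ctx *ctx,
                           struct sk_buff *skb_out)
{
        unsigned int pad = ctx->tx_max - skb_out->len;

        if (!(dev->driver_info->flags & FLAG_SEND_ZLP) && pad > 0)
                memset(skb_put(skb_out, pad), 0, pad);
        else if (skb_out->len < ctx->tx_max &&
                 (skb_out->len % dev->maxpacket) == 0)
                memset(skb_put(skb_out, 1), 0, 1);      /* force a short packet */
}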
-
Bjørn Mork authored
A number of devices in the wild have turned out to require ZLPs. Even if this is a spec violation, our priority is to make any device work as well as possible. Devices needing ZLPs will fail to receive any full-sized frame we send. On the other hand, devices which do not need the ZLP will still work if we send them. This gives us no other option than sending ZLPs by default. This will prevent devices conforming to the spec from making the optimizations which are possible without ZLPs. Known conforming devices are added to a whitelist, to avoid the possible negative impact of the new spec-violating default. Cc: Greg Suarez <gsuarez@smithmicro.com> Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bjørn Mork authored
MBIM is a point-to-point protocol transporting raw IP packets with no L2 headers. Only IPv4 and IPv6 are supported. ARP in particular is not, which is quite logical given the lack of L2 headers. The driver still emulates an ethernet interface, dropping all unsupported protocols, and avoiding neighbour resolving by setting the IFF_NOARP flag. The MBIM specification does not explicitly forbid IPv6 Neighbor Discovery, and it seems other OS implementations will respond to Neighbor Solicitations on MBIM links. There are therefore buggy devices out there which, despite the pointlessness, still require Neighbor Discovery for IPv6 over MBIM. This is incompatible with the IFF_NOARP flag, which disables both ARP and ND. We cannot support ARP in any case, so we have to keep that flag. This patch implements a workaround for the buggy devices, letting the driver respond directly to Neighbor Solicitations from the device. This is not optimal, but will have minimal effect on any sane device. Cc: Greg Suarez <gsuarez@smithmicro.com> Reported-and-tested-by: Thomas Schäfer <tschaefer@t-online.de> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
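A much simplified sketch of the detection half of such a workaround (illustrative names; a real implementation must also validate lengths, check the hop limit, and build and loop back a full Neighbor Advertisement):

#include <linux/ipv6.h>
#include <linux/icmpv6.h>
#include <net/ndisc.h>

/* Sketch: spot a Neighbor Solicitation coming from the device so the driver
 * can answer it itself rather than relying on the (disabled) ND machinery. */
static bool is_neigh_solicitation(const u8 *buf, size_t len)
{
        const struct ipv6hdr *ip6 = (const struct ipv6hdr *)buf;
        const struct icmp6hdr *icmp6;

        if (len < sizeof(*ip6) + sizeof(*icmp6))
                return false;
        if (ip6->version != 6 || ip6->nexthdr != IPPROTO_ICMPV6)
                return false;

        icmp6 = (const struct icmp6hdr *)(ip6 + 1);
        return icmp6->icmp6_type == NDISC_NEIGHBOUR_SOLICITATION;
}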
-
Ben Boeckel authored
Signed-off-by: Ben Boeckel <mathstuf@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Boeckel authored
Also snipes some trailing whitespace. Signed-off-by: Ben Boeckel <mathstuf@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Boeckel authored
Also snipes some whitespace errors. Signed-off-by: Ben Boeckel <mathstuf@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Boeckel authored
Signed-off-by: Ben Boeckel <mathstuf@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Boeckel authored
Also fixes an incorrect function comment (probably copy/paste). Signed-off-by: Ben Boeckel <mathstuf@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Boeckel authored
Also snipes some whitespace errors. Signed-off-by: Ben Boeckel <mathstuf@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Boeckel authored
Also snipes some whitespace errors. Signed-off-by: Ben Boeckel <mathstuf@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next
David S. Miller authored
Jeff Kirsher says: ==================== This series contains updates to e1000, igb, ixgbe and ixgbevf. Hong Zhiguo provides a fix for e1000 where tx_ring and adapter->tx_ring are already of type "struct e1000_tx_ring *", so there is no need to divide by the e1000_tx_ring size in the idx calculation. Emil provides a fix for ixgbevf to remove a redundant workaround related to header split and a fix for ixgbe to resolve an issue where the MTA table can be cleared when the interface is reset while in promisc mode. Todd provides a fix for igb to prevent ethtool from writing to the iNVM in i210/i211 devices. This issue was reported by Marek Vasut <marex@denx.de>. Anton Blanchard provides a fix for ixgbe to reduce memory consumption with larger page sizes, seen on PPC. Don provides a cleanup in ixgbevf to replace the IXGBE_DESC_UNUSED macro with the inline function ixgbevf_desc_unused() to make the logic a bit more readable. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/bwh/sfc-next
David S. Miller authored
Ben Hutchings says: ==================== A single fix by Alexandre Rames for the recent changes to TSO. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 01 Nov, 2013 6 commits
-
-
Emil Tantilov authored
This patch resolves an issue where the MTA table can be cleared when the interface is reset while in promisc mode. As a result, IPv6 traffic between VFs will be interrupted. This patch makes the update of the MTA table unconditional to avoid the inconsistent clearing on reset. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Don Skidmore authored
This patch just replaces the IXGBE_DESC_UNUSED macro with a like-named inline function, ixgbevf_desc_unused. The inline function makes the logic a bit more readable. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
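The replacement is essentially the standard "unused descriptors" helper used across the Intel drivers; a self-contained sketch of its shape (the struct here is a stand-in for the real ring structure):

#include <linux/types.h>

/* Minimal stand-in for the ring bookkeeping fields found in ixgbevf.h. */
struct ring_counters {
        u16 next_to_use;        /* producer index */
        u16 next_to_clean;      /* consumer index */
        u16 count;              /* total descriptors in the ring */
};

/* Free descriptors, keeping one slot empty so that head == tail can only
 * mean "ring empty" and never "ring full". */
static inline u16 desc_unused(const struct ring_counters *ring)
{
        u16 ntc = ring->next_to_clean;
        u16 ntu = ring->next_to_use;

        return ((ntc > ntu) ? 0 : ring->count) + ntc - ntu - 1;
}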
-
Anton Blanchard authored
The ixgbe driver allocates pages for its receive rings. It currently uses 512 pages, regardless of page size. During receive handling it adds the unused part of the page back into the rx ring, avoiding the need for a new allocation. On a ppc64 box with 64 threads and 64kB pages, we end up with 512 entries * 64 rx queues * 64kB = 2GB memory used. Even more of a concern is that we use up 2GB of IOMMU space in order to map all this memory. The driver makes a number of decisions based on if PAGE_SIZE is less than 8kB, so use this as the breakpoint and only allocate 128 entries on 8kB or larger page sizes. Signed-off-by: Anton Blanchard <anton@samba.org> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
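The resulting sizing rule amounts to a page-size breakpoint along these lines (the macro name is illustrative; the driver keys several related decisions off the same test):

#include <linux/mm.h>   /* PAGE_SIZE */

/* With 64kB pages, 512 rx entries per queue * 64 queues * 64kB per page is
 * 2GB of memory and IOMMU space, so large-page systems get fewer entries. */
#define RX_RING_ENTRIES (PAGE_SIZE < 8192 ? 512 : 128)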
-
Fujinaka, Todd authored
Don't let ethtool try to write to iNVM in i210/i211. This fixes an issue seen by Marek Vasut. Reported-by: Marek Vasut <marex@denx.de> Signed-off-by: Todd Fujinaka <todd.fujinaka@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Emil Tantilov authored
This patch removes a workaround related to header split, which is redundant because the driver does not support splitting packet headers on Rx. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Hong Zhiguo authored
tx_ring and adapter->tx_ring are already of type "struct e1000_tx_ring *" Signed-off-by: Hong Zhiguo <zhiguohong@tencent.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
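The underlying C rule: subtracting two pointers of the same type already yields an element count, so dividing by the element size again produces a wrong index. A generic illustration (not the driver code):

#include <stddef.h>

struct ring { int data; };

static ptrdiff_t ring_index(const struct ring *base, const struct ring *elem)
{
        /* wrong: (elem - base) / sizeof(*elem) would scale by the size twice */
        return elem - base;     /* pointer subtraction already counts elements */
}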
-
- 31 Oct, 2013 1 commit
-
-
Alexandre Rames authored
When using firmware assisted TSO, we use a single DMA mapping for the linear area of a TSO skb. We still have to segment the super-packet and insert a descriptor containing the original headers before each segment of payload, so we can unmap the linear area only after the last segment is completed. The unmapping information for the linear area is therefore associated with the last header descriptor. We calculate the DMA address to unmap from using the map length and the invariant that the end of the DMA mapping matches the end of the data referenced by the last descriptor. But this invariant is broken when there is TCP payload in the linear area. Fix this by adding and using an explicit dma_offset field. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
- 30 Oct, 2013 3 commits
-
-
David S. Miller authored
Alexander Aring says: ==================== This patch series cleans up the 6LoWPAN header creation and extends the use of the skb_*_header functions. Patch 2/4 fixes issues with parsing the mac header. The ieee802.15.4 header has a dynamic size which depends on frame control bits, so that patch replaces the static mac header length calculation with a dynamic one. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Aring authored
This patch drops the direct memcpy on the skb and uses the right skb memcpy functions. It also removes an unnecessary check for plen being non-zero. Signed-off-by: Alexander Aring <alex.aring@gmail.com> Reviewed-by: Werner Almesberger <werner@almesberger.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Aring authored
This is necessary to access the network header with the skb_network_header function instead of calculating the position from mac_len, etc. Do the same for the transport header when we replace the IPv6 header with the 6LoWPAN header. Signed-off-by: Alexander Aring <alex.aring@gmail.com> Acked-by: Werner Almesberger <werner@almesberger.net> Signed-off-by: David S. Miller <davem@davemloft.net>
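As a hedged illustration of the call pattern this enables (generic skb API, not the 6LoWPAN code itself):

#include <linux/skbuff.h>

/* Sketch: record where the (new) network header starts so later code can
 * use skb_network_header() instead of recomputing the offset from mac_len. */
static void mark_network_header(struct sk_buff *skb)
{
        unsigned char *nh;

        skb_reset_network_header(skb);  /* header now begins at skb->data */
        nh = skb_network_header(skb);   /* portable accessor for later code */
        (void)nh;
}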
-