Commit 71ba22fa authored by Linus Torvalds

Merge branch 'upstream-linus' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6

* 'upstream-linus' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6: (75 commits)
  Ethernet driver for EISA only SNI RM200/RM400 machines
  Extract chip specific code out of lasi_82596.c
  ehea: Whitespace cleanup
  pasemi_mac: Fix TX interrupt threshold
  spidernet: Replace literal with const
  r8169: perform RX config change after mac filtering
  r8169: mac address change support
  r8169: display some extra debug information during startup
  r8169: add endianess annotations to [RT]xDesc
  r8169: align the IP header when there is no DMA constraint
  r8169: add bit description for the TxPoll register
  r8169: cleanup
  r8169: remove the media option
  r8169: small 8101 comment
  r8169: confusion between hardware and IP header alignment
  r8169: merge with version 8.001.00 of Realtek's r8168 driver
  r8169: merge with version 6.001.00 of Realtek's r8169 driver
  r8169: prettify mac_version
  r8169: populate the hw_start handler for the 8110
  r8169: populate the hw_start handler for the 8168
  ...
parents 27a278aa f2ec8030
The Spidernet Device Driver
===========================
Written by Linas Vepstas <linas@austin.ibm.com>
Version of 7 June 2007
Abstract
========
This document sketches the structure of portions of the spidernet
device driver in the Linux kernel tree. The spidernet is a gigabit
ethernet device built into the Toshiba southbridge commonly used
in the SONY Playstation 3 and the IBM QS20 Cell blade.
The Structure of the RX Ring.
=============================
The receive (RX) ring is a circular linked list of RX descriptors,
together with three pointers into the ring that are used to manage its
contents.
The elements of the ring are called "descriptors" or "descrs"; they
describe the received data. This includes a pointer to a buffer
containing the received data, the buffer size, and various status bits.
There are three primary states that a descriptor can be in: "empty",
"full" and "not-in-use". An "empty" or "ready" descriptor is ready
to receive data from the hardware. A "full" descriptor has data in it,
and is waiting to be emptied and processed by the OS. A "not-in-use"
descriptor is neither empty nor full; it is simply not ready. It may
not even have a data buffer in it, or may be otherwise unusable.
On device startup, the OS (specifically, the
spidernet device driver) allocates a set of RX descriptors and RX
buffers. These are all marked "empty", ready to receive data. This
ring is handed off to the hardware, which sequentially fills in the
buffers, and marks them "full". The OS follows up, taking the full
buffers, processing them, and re-marking them empty.
This filling and emptying is managed by three pointers: the "head"
and "tail" pointers, maintained by the OS, and a hardware current
descriptor pointer (GDACTDPA). The GDACTDPA points at the descr
currently being filled. When this descr is filled, the hardware
marks it full, and advances the GDACTDPA by one. Thus, when there is
flowing RX traffic, every descr behind it should be marked "full",
and everything in front of it should be "empty". If the hardware
discovers that the current descr is not empty, it will signal an
interrupt, and halt processing.
The tail pointer trails the hardware pointer. When the
hardware is ahead, the tail pointer will be pointing at a "full"
descr. The OS will process this descr, and then mark it "not-in-use",
and advance the tail pointer. Thus, when there is flowing RX traffic,
all of the descrs in front of the tail pointer should be "full", and
all of those behind it should be "not-in-use". When RX traffic is not
flowing, then the tail pointer can catch up to the hardware pointer.
The OS will then note that the current tail is "empty", and halt
processing.
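
A minimal sketch of the tail-side loop just described, in plain C. The
names (descr_state, process_frame, rx_drain_tail) are illustrative, not
the driver's actual identifiers:

/* Illustrative descriptor states; the hardware encodes these in status bits. */
enum descr_state { DESCR_EMPTY, DESCR_FULL, DESCR_NOT_IN_USE };

struct descr {
        enum descr_state state;
        void *buf;                      /* DMA buffer holding the received frame */
        struct descr *next;             /* circular linked list */
};

/* Stand-in for handing the packet to the network stack. */
static void process_frame(void *buf) { (void)buf; }

/* Drain "full" descrs starting at the tail; stop at the first descr that
 * is not full (an "empty" one means we have caught up with the hardware). */
static void rx_drain_tail(struct descr **tail)
{
        while ((*tail)->state == DESCR_FULL) {
                process_frame((*tail)->buf);
                (*tail)->state = DESCR_NOT_IN_USE;
                *tail = (*tail)->next;  /* advance the tail pointer */
        }
}
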
The head pointer (somewhat mis-named) follows after the tail pointer.
When traffic is flowing, then the head pointer will be pointing at
a "not-in-use" descr. The OS will perform various housekeeping duties
on this descr. This includes allocating a new data buffer and
dma-mapping it so as to make it visible to the hardware. The OS will
then mark the descr as "empty", ready to receive data. Thus, when there
is flowing RX traffic, everything in front of the head pointer should
be "not-in-use", and everything behind it should be "empty". If no
RX traffic is flowing, then the head pointer can catch up to the tail
pointer, at which point the OS will notice that the head descr is
"empty", and it will halt processing.
Thus, in an idle system, the GDACTDPA, tail and head pointers will
all be pointing at the same descr, which should be "empty". All of the
other descrs in the ring should be "empty" as well.
The show_rx_chain() routine will print out the locations of the
GDACTDPA, tail and head pointers. It will also summarize the contents
of the ring, starting at the tail pointer, and listing the status
of the descrs that follow.
A typical example of the output, for a nearly idle system, might be:
net eth1: Total number of descrs=256
net eth1: Chain tail located at descr=20
net eth1: Chain head is at 20
net eth1: HW curr desc (GDACTDPA) is at 21
net eth1: Have 1 descrs with stat=x40800101
net eth1: HW next desc (GDACNEXTDA) is at 22
net eth1: Last 255 descrs with stat=xa0800000
In the above, the hardware has filled in one descr, number 20. Both
head and tail are pointing at 20, because it has not yet been emptied.
Meanwhile, hw is pointing at 21, which is free.
The "Have nnn decrs" refers to the descr starting at the tail: in this
case, nnn=1 descr, starting at descr 20. The "Last nnn descrs" refers
to all of the rest of the descrs, from the last status change. The "nnn"
is a count of how many descrs have exactly the same status.
The status x4... corresponds to "full" and status xa... corresponds
to "empty". The actual value printed is RXCOMST_A.
In the device driver source code, a different set of names is
used for these same concepts, so that
"empty" == SPIDER_NET_DESCR_CARDOWNED == 0xa
"full" == SPIDER_NET_DESCR_FRAME_END == 0x4
"not in use" == SPIDER_NET_DESCR_NOT_IN_USE == 0xf
The RX RAM full bug/feature
===========================
As long as the OS can empty out the RX buffers at a rate faster than
the hardware can fill them, there is no problem. If, for some reason,
the OS fails to empty the RX ring fast enough, the hardware GDACTDPA
pointer will catch up to the head, notice the not-empty condition,
and stop. However, RX packets may still continue arriving on the wire.
The spidernet chip can save some limited number of these in local RAM.
When this local RAM fills up, the spider chip will issue an interrupt
indicating this (GHIINT0STS will show ERRINT, and the GRMFLLINT bit
will be set in GHIINT1STS). When the RX RAM full condition occurs,
a certain bug/feature is triggered that has to be specially handled.
This section describes the special handling for this condition.
When the OS finally has a chance to run, it will empty out the RX ring.
In particular, it will clear the descriptor on which the hardware had
stopped. However, once the hardware has decided that a certain
descriptor is invalid, it will not restart at that descriptor; instead
it will restart at the next descr. This potentially will lead to a
deadlock condition, as the tail pointer will be pointing at this descr,
which, from the OS point of view, is empty; the OS will be waiting for
this descr to be filled. However, the hardware has skipped this descr,
and is filling the next descrs. Since the OS doesn't see this, there
is a potential deadlock, with the OS waiting for one descr to fill,
while the hardware is waiting for a different set of descrs to become
empty.
A call to show_rx_chain() at this point indicates the nature of the
problem. A typical print when the network is hung shows the following:
net eth1: Spider RX RAM full, incoming packets might be discarded!
net eth1: Total number of descrs=256
net eth1: Chain tail located at descr=255
net eth1: Chain head is at 255
net eth1: HW curr desc (GDACTDPA) is at 0
net eth1: Have 1 descrs with stat=xa0800000
net eth1: HW next desc (GDACNEXTDA) is at 1
net eth1: Have 127 descrs with stat=x40800101
net eth1: Have 1 descrs with stat=x40800001
net eth1: Have 126 descrs with stat=x40800101
net eth1: Last 1 descrs with stat=xa0800000
Both the tail and head pointers are pointing at descr 255, which is
marked xa... which is "empty". Thus, from the OS point of view, there
is nothing to be done. In particular, there is the implicit assumption
that everything in front of the "empty" descr must surely also be empty,
as explained in the last section. The OS is waiting for descr 255 to
become non-empty, which, in this case, will never happen.
The HW pointer is at descr 0. This descr is marked 0x4.. or "full".
Since it's already full, the hardware can do nothing more, and thus has
halted processing. Notice that descrs 0 through 253 are all marked
"full", while descrs 254 and 255 are empty. (The "Last 1 descrs" is
descr 254, since tail was at 255.) Thus, the system is deadlocked,
and there can be no forward progress; the OS thinks there's nothing
to do, and the hardware has nowhere to put incoming data.
This bug/feature is worked around with the spider_net_resync_head_ptr()
routine. When the driver receives RX interrupts, but an examination
of the RX chain seems to show it is empty, then it is probable that
the hardware has skipped a descr or two (sometimes dozens under heavy
network conditions). The spider_net_resync_head_ptr() subroutine will
search the ring for the next full descr, and the driver will resume
operations there. Since this will leave "holes" in the ring, there
is also a spider_net_resync_tail_ptr() that will skip over such holes.
As of this writing, the spider_net_resync() strategy seems to work very
well, even under heavy network loads.
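
The core of the resync idea, sketched with the types from the earlier RX
ring example (the real spider_net_resync_head_ptr() and
spider_net_resync_tail_ptr() also bound the search and fix up both
pointers):

/* Walk forward from the stuck position until a "full" descr is found,
 * skipping over any descrs the hardware has silently passed by. */
static struct descr *resync_to_next_full(struct descr *start,
                                         unsigned int ring_size)
{
        struct descr *d = start;
        unsigned int i;

        for (i = 0; i < ring_size; i++) {
                if (d->state == DESCR_FULL)
                        return d;       /* resume emptying from here */
                d = d->next;
        }
        return start;                   /* the ring really is empty */
}
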
The TX ring
===========
The TX ring uses a low-watermark interrupt scheme to make sure that
the TX queue is appropriately serviced for large packet sizes.
For packet sizes greater than about 1KBytes, the kernel can fill
the TX ring quicker than the device can drain it. Once the ring
is full, the netdev is stopped. When there is room in the ring,
the netdev needs to be reawakened, so that more TX packets are placed
in the ring. The hardware can empty the ring about four times per jiffy,
so it's not appropriate to wait for the poll routine to refill, since
the poll routine runs only once per jiffy. The low-watermark mechanism
marks a descr about 1/4th of the way from the bottom of the queue, so
that an interrupt is generated when the descr is processed. This
interrupt wakes up the netdev, which can then refill the queue.
For large packets, this mechanism generates a relatively small number
of interrupts, about 1K/sec. For smaller packets, this will drop to zero
interrupts, as the hardware can empty the queue faster than the kernel
can fill it.
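
A sketch of the low-watermark placement described above; the ring size,
field name and helper are illustrative, and the real driver also skips
the marking when the queue is nearly empty:

#define TX_RING_SIZE 256

struct tx_descr {
        int request_irq;                /* ask the HW for a completion interrupt */
        /* ... buffer pointer, length, status ... */
};

/* Mark the descr roughly 3/4 of the way through the pending frames, i.e.
 * about 1/4 from the bottom of the queue, so the interrupt fires while
 * there is still work queued and the netdev can be woken promptly. */
static void set_tx_low_watermark(struct tx_descr *ring, unsigned int tail,
                                 unsigned int pending)
{
        unsigned int wm = (tail + (3 * pending) / 4) % TX_RING_SIZE;

        ring[wm].request_irq = 1;
}
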
======= END OF DOCUMENT ========
......@@ -3049,6 +3049,16 @@ S: Maintained
RISCOM8 DRIVER
S: Orphan
RTL818X WIRELESS DRIVER
P: Michael Wu
M: flamingice@sourmilk.net
P: Andrea Merello
M: andreamrl@tiscali.it
L: linux-wireless@vger.kernel.org
W: http://linuxwireless.org/
T: git kernel.org:/pub/scm/linux/kernel/git/mwu/mac80211-drivers.git
S: Maintained
S3 SAVAGE FRAMEBUFFER DRIVER
P: Antonino Daplas
M: adaplas@gmail.com
......
......@@ -34,6 +34,11 @@ config PHANTOM
If you choose to build module, its name will be phantom. If unsure,
say N here.
config EEPROM_93CX6
tristate "EEPROM 93CX6 support"
---help---
This is a driver for the EEPROM chipsets 93c46 and 93c66.
The driver supports both read as well as write commands.
If unsure, say N.
......@@ -187,5 +192,4 @@ config THINKPAD_ACPI_BAY
If you are not sure, say Y here.
endmenu
......@@ -14,3 +14,4 @@ obj-$(CONFIG_PHANTOM) += phantom.o
obj-$(CONFIG_SGI_IOC4) += ioc4.o
obj-$(CONFIG_SONY_LAPTOP) += sony-laptop.o
obj-$(CONFIG_THINKPAD_ACPI) += thinkpad_acpi.o
obj-$(CONFIG_EEPROM_93CX6) += eeprom_93cx6.o
/*
Copyright (C) 2004 - 2006 rt2x00 SourceForge Project
<http://rt2x00.serialmonkey.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the
Free Software Foundation, Inc.,
59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
/*
Module: eeprom_93cx6
Abstract: EEPROM reader routines for 93cx6 chipsets.
Supported chipsets: 93c46 & 93c66.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/version.h>
#include <linux/delay.h>
#include <linux/eeprom_93cx6.h>
MODULE_AUTHOR("http://rt2x00.serialmonkey.com");
MODULE_VERSION("1.0");
MODULE_DESCRIPTION("EEPROM 93cx6 chip driver");
MODULE_LICENSE("GPL");
static inline void eeprom_93cx6_pulse_high(struct eeprom_93cx6 *eeprom)
{
eeprom->reg_data_clock = 1;
eeprom->register_write(eeprom);
/*
* Add a short delay for the pulse to work.
* According to the specifications the "maximum minimum"
* time should be 450ns.
*/
ndelay(450);
}
static inline void eeprom_93cx6_pulse_low(struct eeprom_93cx6 *eeprom)
{
eeprom->reg_data_clock = 0;
eeprom->register_write(eeprom);
/*
* Add a short delay for the pulse to work.
* According to the specifications the minimal time
* should be 450ns so a 1us delay is sufficient.
*/
udelay(1);
}
static void eeprom_93cx6_startup(struct eeprom_93cx6 *eeprom)
{
/*
* Clear all flags, and enable chip select.
*/
eeprom->register_read(eeprom);
eeprom->reg_data_in = 0;
eeprom->reg_data_out = 0;
eeprom->reg_data_clock = 0;
eeprom->reg_chip_select = 1;
eeprom->register_write(eeprom);
/*
* kick a pulse.
*/
eeprom_93cx6_pulse_high(eeprom);
eeprom_93cx6_pulse_low(eeprom);
}
static void eeprom_93cx6_cleanup(struct eeprom_93cx6 *eeprom)
{
/*
* Clear chip_select and data_in flags.
*/
eeprom->register_read(eeprom);
eeprom->reg_data_in = 0;
eeprom->reg_chip_select = 0;
eeprom->register_write(eeprom);
/*
* kick a pulse.
*/
eeprom_93cx6_pulse_high(eeprom);
eeprom_93cx6_pulse_low(eeprom);
}
static void eeprom_93cx6_write_bits(struct eeprom_93cx6 *eeprom,
const u16 data, const u16 count)
{
unsigned int i;
eeprom->register_read(eeprom);
/*
* Clear data flags.
*/
eeprom->reg_data_in = 0;
eeprom->reg_data_out = 0;
/*
* Start writing all bits.
*/
for (i = count; i > 0; i--) {
/*
* Check if this bit needs to be set.
*/
eeprom->reg_data_in = !!(data & (1 << (i - 1)));
/*
* Write the bit to the eeprom register.
*/
eeprom->register_write(eeprom);
/*
* Kick a pulse.
*/
eeprom_93cx6_pulse_high(eeprom);
eeprom_93cx6_pulse_low(eeprom);
}
eeprom->reg_data_in = 0;
eeprom->register_write(eeprom);
}
static void eeprom_93cx6_read_bits(struct eeprom_93cx6 *eeprom,
u16 *data, const u16 count)
{
unsigned int i;
u16 buf = 0;
eeprom->register_read(eeprom);
/*
* Clear data flags.
*/
eeprom->reg_data_in = 0;
eeprom->reg_data_out = 0;
/*
* Start reading all bits.
*/
for (i = count; i > 0; i--) {
eeprom_93cx6_pulse_high(eeprom);
eeprom->register_read(eeprom);
/*
* Clear data_in flag.
*/
eeprom->reg_data_in = 0;
/*
* Read if the bit has been set.
*/
if (eeprom->reg_data_out)
buf |= (1 << (i - 1));
eeprom_93cx6_pulse_low(eeprom);
}
*data = buf;
}
/**
* eeprom_93cx6_read - Read a word from eeprom
* @eeprom: Pointer to eeprom structure
* @word: Word index from where we should start reading
* @data: target pointer where the information will have to be stored
*
* This function will read the eeprom data as host-endian word
* into the given data pointer.
*/
void eeprom_93cx6_read(struct eeprom_93cx6 *eeprom, const u8 word,
u16 *data)
{
u16 command;
/*
* Initialize the eeprom register
*/
eeprom_93cx6_startup(eeprom);
/*
* Select the read opcode and the word to be read.
*/
command = (PCI_EEPROM_READ_OPCODE << eeprom->width) | word;
eeprom_93cx6_write_bits(eeprom, command,
PCI_EEPROM_WIDTH_OPCODE + eeprom->width);
/*
* Read the requested 16 bits.
*/
eeprom_93cx6_read_bits(eeprom, data, 16);
/*
* Cleanup eeprom register.
*/
eeprom_93cx6_cleanup(eeprom);
}
EXPORT_SYMBOL_GPL(eeprom_93cx6_read);
/**
* eeprom_93cx6_multiread - Read multiple words from eeprom
* @eeprom: Pointer to eeprom structure
* @word: Word index from where we should start reading
* @data: target pointer where the information will have to be stored
* @words: Number of words that should be read.
*
* This function will read all requested words from the eeprom,
* by calling eeprom_93cx6_read() multiple times. The difference is
* that while eeprom_93cx6_read() returns a host-ordered word, this
* function returns little-endian words.
*/
void eeprom_93cx6_multiread(struct eeprom_93cx6 *eeprom, const u8 word,
__le16 *data, const u16 words)
{
unsigned int i;
u16 tmp;
for (i = 0; i < words; i++) {
tmp = 0;
eeprom_93cx6_read(eeprom, word + i, &tmp);
data[i] = cpu_to_le16(tmp);
}
}
EXPORT_SYMBOL_GPL(eeprom_93cx6_multiread);
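
For context, a hedged sketch of how a consumer driver might wire up this
helper. The reg_* flags, callbacks and width come from the code above;
the .data cookie and the PCI_EEPROM_WIDTH_93C46 constant are assumed to
be provided by linux/eeprom_93cx6.h, and the callback bodies (device
specific register accesses) are only declared here:

#include <linux/eeprom_93cx6.h>

/* Hypothetical callbacks: mirror the reg_* flags into the device's own
 * EEPROM control register (read) and back out of it (write). */
static void my_eeprom_register_read(struct eeprom_93cx6 *eeprom);
static void my_eeprom_register_write(struct eeprom_93cx6 *eeprom);

static void my_read_mac_words(void *priv, __le16 mac[3])
{
        struct eeprom_93cx6 eeprom = {
                .data           = priv, /* handed back to the callbacks */
                .register_read  = my_eeprom_register_read,
                .register_write = my_eeprom_register_write,
                .width          = PCI_EEPROM_WIDTH_93C46, /* 6 address bits */
        };

        /* Read three little-endian words starting at word 0. */
        eeprom_93cx6_multiread(&eeprom, 0, mac, 3);
}
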
......@@ -107,11 +107,6 @@ MODULE_PARM_DESC (multicast_filter_limit, "8139cp: maximum number of filtered mu
#define PFX DRV_NAME ": "
#ifndef TRUE
#define FALSE 0
#define TRUE (!FALSE)
#endif
#define CP_DEF_MSG_ENABLE (NETIF_MSG_DRV | \
NETIF_MSG_PROBE | \
NETIF_MSG_LINK)
......@@ -661,7 +656,7 @@ static irqreturn_t cp_interrupt (int irq, void *dev_instance)
if (status & (TxOK | TxErr | TxEmpty | SWInt))
cp_tx(cp);
if (status & LinkChg)
mii_check_media(&cp->mii_if, netif_msg_link(cp), FALSE);
mii_check_media(&cp->mii_if, netif_msg_link(cp), false);
spin_unlock(&cp->lock);
......@@ -1188,7 +1183,7 @@ static int cp_open (struct net_device *dev)
goto err_out_hw;
netif_carrier_off(dev);
mii_check_media(&cp->mii_if, netif_msg_link(cp), TRUE);
mii_check_media(&cp->mii_if, netif_msg_link(cp), true);
netif_start_queue(dev);
return 0;
......@@ -2050,7 +2045,7 @@ static int cp_resume (struct pci_dev *pdev)
spin_lock_irqsave (&cp->lock, flags);
mii_check_media(&cp->mii_if, netif_msg_link(cp), FALSE);
mii_check_media(&cp->mii_if, netif_msg_link(cp), false);
spin_unlock_irqrestore (&cp->lock, flags);
......
......@@ -157,6 +157,7 @@ obj-$(CONFIG_ELPLUS) += 3c505.o
obj-$(CONFIG_AC3200) += ac3200.o 8390.o
obj-$(CONFIG_APRICOT) += 82596.o
obj-$(CONFIG_LASI_82596) += lasi_82596.o
obj-$(CONFIG_SNI_82596) += sni_82596.o
obj-$(CONFIG_MVME16x_NET) += 82596.o
obj-$(CONFIG_BVME6000_NET) += 82596.o
obj-$(CONFIG_SC92031) += sc92031.o
......
......@@ -159,10 +159,6 @@ static struct pci_device_id acenic_pci_tbl[] = {
};
MODULE_DEVICE_TABLE(pci, acenic_pci_tbl);
#ifndef SET_NETDEV_DEV
#define SET_NETDEV_DEV(net, pdev) do{} while(0)
#endif
#define ace_sync_irq(irq) synchronize_irq(irq)
#ifndef offset_in_page
......
......@@ -4,7 +4,7 @@
#
config ARM_AM79C961A
bool "ARM EBSA110 AM79C961A support"
depends on NET_ETHERNET && ARM && ARCH_EBSA110
depends on ARM && ARCH_EBSA110
select CRC32
help
If you wish to compile a kernel for the EBSA-110, then you should
......@@ -12,21 +12,21 @@ config ARM_AM79C961A
config ARM_ETHER1
tristate "Acorn Ether1 support"
depends on NET_ETHERNET && ARM && ARCH_ACORN
depends on ARM && ARCH_ACORN
help
If you have an Acorn system with one of these (AKA25) network cards,
you should say Y to this option if you wish to use it with Linux.
config ARM_ETHER3
tristate "Acorn/ANT Ether3 support"
depends on NET_ETHERNET && ARM && ARCH_ACORN
depends on ARM && ARCH_ACORN
help
If you have an Acorn system with one of these network cards, you
should say Y to this option if you wish to use it with Linux.
config ARM_ETHERH
tristate "I-cubed EtherH/ANT EtherM support"
depends on NET_ETHERNET && ARM && ARCH_ACORN
depends on ARM && ARCH_ACORN
select CRC32
help
If you have an Acorn system with one of these network cards, you
......@@ -34,7 +34,7 @@ config ARM_ETHERH
config ARM_AT91_ETHER
tristate "AT91RM9200 Ethernet support"
depends on NET_ETHERNET && ARM && ARCH_AT91RM9200
depends on ARM && ARCH_AT91RM9200
select MII
help
If you wish to compile a kernel for the AT91RM9200 and enable
......@@ -42,7 +42,7 @@ config ARM_AT91_ETHER
config EP93XX_ETH
tristate "EP93xx Ethernet support"
depends on NET_ETHERNET && ARM && ARCH_EP93XX
depends on ARM && ARCH_EP93XX
help
This is a driver for the ethernet hardware included in EP93xx CPUs.
Say Y if you are building a kernel for EP93xx based devices.
......@@ -15,6 +15,7 @@
#include <linux/ethtool.h>
#include <linux/mii.h>
#include <linux/if_ether.h>
#include <linux/if_vlan.h>
#include <linux/etherdevice.h>
#include <linux/pci.h>
#include <linux/delay.h>
......@@ -68,8 +69,8 @@
(BP)->tx_cons - (BP)->tx_prod - TX_RING_GAP(BP))
#define NEXT_TX(N) (((N) + 1) & (B44_TX_RING_SIZE - 1))
#define RX_PKT_BUF_SZ (1536 + bp->rx_offset + 64)
#define TX_PKT_BUF_SZ (B44_MAX_MTU + ETH_HLEN + 8)
#define RX_PKT_OFFSET 30
#define RX_PKT_BUF_SZ (1536 + RX_PKT_OFFSET + 64)
/* minimum number of free TX descriptors required to wake up TX process */
#define B44_TX_WAKEUP_THRESH (B44_TX_RING_SIZE / 4)
......@@ -599,8 +600,7 @@ static void b44_timer(unsigned long __opaque)
spin_unlock_irq(&bp->lock);
bp->timer.expires = jiffies + HZ;
add_timer(&bp->timer);
mod_timer(&bp->timer, round_jiffies(jiffies + HZ));
}
static void b44_tx(struct b44 *bp)
......@@ -653,7 +653,7 @@ static int b44_alloc_rx_skb(struct b44 *bp, int src_idx, u32 dest_idx_unmasked)
src_map = &bp->rx_buffers[src_idx];
dest_idx = dest_idx_unmasked & (B44_RX_RING_SIZE - 1);
map = &bp->rx_buffers[dest_idx];
skb = dev_alloc_skb(RX_PKT_BUF_SZ);
skb = netdev_alloc_skb(bp->dev, RX_PKT_BUF_SZ);
if (skb == NULL)
return -ENOMEM;
......@@ -669,7 +669,7 @@ static int b44_alloc_rx_skb(struct b44 *bp, int src_idx, u32 dest_idx_unmasked)
if (!dma_mapping_error(mapping))
pci_unmap_single(bp->pdev, mapping, RX_PKT_BUF_SZ,PCI_DMA_FROMDEVICE);
dev_kfree_skb_any(skb);
skb = __dev_alloc_skb(RX_PKT_BUF_SZ,GFP_DMA);
skb = __netdev_alloc_skb(bp->dev, RX_PKT_BUF_SZ, GFP_ATOMIC|GFP_DMA);
if (skb == NULL)
return -ENOMEM;
mapping = pci_map_single(bp->pdev, skb->data,
......@@ -684,11 +684,9 @@ static int b44_alloc_rx_skb(struct b44 *bp, int src_idx, u32 dest_idx_unmasked)
}
}
skb->dev = bp->dev;
skb_reserve(skb, bp->rx_offset);
rh = (struct rx_header *) skb->data;
skb_reserve(skb, RX_PKT_OFFSET);
rh = (struct rx_header *)
(skb->data - bp->rx_offset);
rh->len = 0;
rh->flags = 0;
......@@ -698,13 +696,13 @@ static int b44_alloc_rx_skb(struct b44 *bp, int src_idx, u32 dest_idx_unmasked)
if (src_map != NULL)
src_map->skb = NULL;
ctrl = (DESC_CTRL_LEN & (RX_PKT_BUF_SZ - bp->rx_offset));
ctrl = (DESC_CTRL_LEN & (RX_PKT_BUF_SZ - RX_PKT_OFFSET));
if (dest_idx == (B44_RX_RING_SIZE - 1))
ctrl |= DESC_CTRL_EOT;
dp = &bp->rx_ring[dest_idx];
dp->ctrl = cpu_to_le32(ctrl);
dp->addr = cpu_to_le32((u32) mapping + bp->rx_offset + bp->dma_offset);
dp->addr = cpu_to_le32((u32) mapping + RX_PKT_OFFSET + bp->dma_offset);
if (bp->flags & B44_FLAG_RX_RING_HACK)
b44_sync_dma_desc_for_device(bp->pdev, bp->rx_ring_dma,
......@@ -783,7 +781,7 @@ static int b44_rx(struct b44 *bp, int budget)
PCI_DMA_FROMDEVICE);
rh = (struct rx_header *) skb->data;
len = le16_to_cpu(rh->len);
if ((len > (RX_PKT_BUF_SZ - bp->rx_offset)) ||
if ((len > (RX_PKT_BUF_SZ - RX_PKT_OFFSET)) ||
(rh->flags & cpu_to_le16(RX_FLAG_ERRORS))) {
drop_it:
b44_recycle_rx(bp, cons, bp->rx_prod);
......@@ -815,8 +813,8 @@ static int b44_rx(struct b44 *bp, int budget)
pci_unmap_single(bp->pdev, map,
skb_size, PCI_DMA_FROMDEVICE);
/* Leave out rx_header */
skb_put(skb, len+bp->rx_offset);
skb_pull(skb,bp->rx_offset);
skb_put(skb, len + RX_PKT_OFFSET);
skb_pull(skb, RX_PKT_OFFSET);
} else {
struct sk_buff *copy_skb;
......@@ -828,7 +826,7 @@ static int b44_rx(struct b44 *bp, int budget)
skb_reserve(copy_skb, 2);
skb_put(copy_skb, len);
/* DMA sync done above, copy just the actual packet */
skb_copy_from_linear_data_offset(skb, bp->rx_offset,
skb_copy_from_linear_data_offset(skb, RX_PKT_OFFSET,
copy_skb->data, len);
skb = copy_skb;
}
......@@ -969,7 +967,6 @@ static void b44_tx_timeout(struct net_device *dev)
static int b44_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct b44 *bp = netdev_priv(dev);
struct sk_buff *bounce_skb;
int rc = NETDEV_TX_OK;
dma_addr_t mapping;
u32 len, entry, ctrl;
......@@ -987,12 +984,13 @@ static int b44_start_xmit(struct sk_buff *skb, struct net_device *dev)
mapping = pci_map_single(bp->pdev, skb->data, len, PCI_DMA_TODEVICE);
if (dma_mapping_error(mapping) || mapping + len > DMA_30BIT_MASK) {
struct sk_buff *bounce_skb;
/* Chip can't handle DMA to/from >1GB, use bounce buffer */
if (!dma_mapping_error(mapping))
pci_unmap_single(bp->pdev, mapping, len, PCI_DMA_TODEVICE);
bounce_skb = __dev_alloc_skb(TX_PKT_BUF_SZ,
GFP_ATOMIC|GFP_DMA);
bounce_skb = __dev_alloc_skb(len, GFP_ATOMIC | GFP_DMA);
if (!bounce_skb)
goto err_out;
......@@ -1001,13 +999,12 @@ static int b44_start_xmit(struct sk_buff *skb, struct net_device *dev)
if (dma_mapping_error(mapping) || mapping + len > DMA_30BIT_MASK) {
if (!dma_mapping_error(mapping))
pci_unmap_single(bp->pdev, mapping,
len, PCI_DMA_TODEVICE);
len, PCI_DMA_TODEVICE);
dev_kfree_skb_any(bounce_skb);
goto err_out;
}
skb_copy_from_linear_data(skb, skb_put(bounce_skb, len),
skb->len);
skb_copy_from_linear_data(skb, skb_put(bounce_skb, len), len);
dev_kfree_skb_any(skb);
skb = bounce_skb;
}
......@@ -1396,12 +1393,12 @@ static void b44_init_hw(struct b44 *bp, int reset_kind)
bw32(bp, B44_TX_WMARK, 56); /* XXX magic */
if (reset_kind == B44_PARTIAL_RESET) {
bw32(bp, B44_DMARX_CTRL, (DMARX_CTRL_ENABLE |
(bp->rx_offset << DMARX_CTRL_ROSHIFT)));
(RX_PKT_OFFSET << DMARX_CTRL_ROSHIFT)));
} else {
bw32(bp, B44_DMATX_CTRL, DMATX_CTRL_ENABLE);
bw32(bp, B44_DMATX_ADDR, bp->tx_ring_dma + bp->dma_offset);
bw32(bp, B44_DMARX_CTRL, (DMARX_CTRL_ENABLE |
(bp->rx_offset << DMARX_CTRL_ROSHIFT)));
(RX_PKT_OFFSET << DMARX_CTRL_ROSHIFT)));
bw32(bp, B44_DMARX_ADDR, bp->rx_ring_dma + bp->dma_offset);
bw32(bp, B44_DMARX_PTR, bp->rx_pending);
......@@ -2093,11 +2090,6 @@ static int __devinit b44_get_invariants(struct b44 *bp)
bp->phy_addr = eeprom[90] & 0x1f;
/* With this, plus the rx_header prepended to the data by the
* hardware, we'll land the ethernet header on a 2-byte boundary.
*/
bp->rx_offset = 30;
bp->imask = IMASK_DEF;
bp->core_unit = ssb_core_unit(bp);
......@@ -2348,11 +2340,11 @@ static int b44_resume(struct pci_dev *pdev)
netif_device_attach(bp->dev);
spin_unlock_irq(&bp->lock);
bp->timer.expires = jiffies + HZ;
add_timer(&bp->timer);
b44_enable_ints(bp);
netif_wake_queue(dev);
mod_timer(&bp->timer, jiffies + 1);
return 0;
}
......
......@@ -443,8 +443,6 @@ struct b44 {
#define B44_FLAG_TX_RING_HACK 0x40000000
#define B44_FLAG_WOL_ENABLE 0x80000000
u32 rx_offset;
u32 msg_enable;
struct timer_list timer;
......
......@@ -71,27 +71,29 @@ enum { /* adapter flags */
QUEUES_BOUND = (1 << 3),
};
struct fl_pg_chunk {
struct page *page;
void *va;
unsigned int offset;
};
struct rx_desc;
struct rx_sw_desc;
struct sge_fl_page {
struct skb_frag_struct frag;
unsigned char *va;
};
struct sge_fl { /* SGE per free-buffer list state */
unsigned int buf_size; /* size of each Rx buffer */
unsigned int credits; /* # of available Rx buffers */
unsigned int size; /* capacity of free list */
unsigned int cidx; /* consumer index */
unsigned int pidx; /* producer index */
unsigned int gen; /* free list generation */
unsigned int cntxt_id; /* SGE context id for the free list */
struct sge_fl_page page;
struct rx_desc *desc; /* address of HW Rx descriptor ring */
struct rx_sw_desc *sdesc; /* address of SW Rx descriptor ring */
dma_addr_t phys_addr; /* physical address of HW ring start */
unsigned long empty; /* # of times queue ran out of buffers */
struct sge_fl { /* SGE per free-buffer list state */
unsigned int buf_size; /* size of each Rx buffer */
unsigned int credits; /* # of available Rx buffers */
unsigned int size; /* capacity of free list */
unsigned int cidx; /* consumer index */
unsigned int pidx; /* producer index */
unsigned int gen; /* free list generation */
struct fl_pg_chunk pg_chunk;/* page chunk cache */
unsigned int use_pages; /* whether FL uses pages or sk_buffs */
struct rx_desc *desc; /* address of HW Rx descriptor ring */
struct rx_sw_desc *sdesc; /* address of SW Rx descriptor ring */
dma_addr_t phys_addr; /* physical address of HW ring start */
unsigned int cntxt_id; /* SGE context id for the free list */
unsigned long empty; /* # of times queue ran out of buffers */
unsigned long alloc_failed; /* # of times buffer allocation failed */
};
......
......@@ -101,6 +101,7 @@ enum {
TCB_SIZE = 128, /* TCB size */
NMTUS = 16, /* size of MTU table */
NCCTRL_WIN = 32, /* # of congestion control windows */
PROTO_SRAM_LINES = 128, /* size of TP sram */
};
#define MAX_RX_COALESCING_LEN 16224U
......@@ -123,6 +124,30 @@ enum { /* adapter interrupt-maintained statistics */
IRQ_NUM_STATS /* keep last */
};
enum {
TP_VERSION_MAJOR = 1,
TP_VERSION_MINOR = 0,
TP_VERSION_MICRO = 44
};
#define S_TP_VERSION_MAJOR 16
#define M_TP_VERSION_MAJOR 0xFF
#define V_TP_VERSION_MAJOR(x) ((x) << S_TP_VERSION_MAJOR)
#define G_TP_VERSION_MAJOR(x) \
(((x) >> S_TP_VERSION_MAJOR) & M_TP_VERSION_MAJOR)
#define S_TP_VERSION_MINOR 8
#define M_TP_VERSION_MINOR 0xFF
#define V_TP_VERSION_MINOR(x) ((x) << S_TP_VERSION_MINOR)
#define G_TP_VERSION_MINOR(x) \
(((x) >> S_TP_VERSION_MINOR) & M_TP_VERSION_MINOR)
#define S_TP_VERSION_MICRO 0
#define M_TP_VERSION_MICRO 0xFF
#define V_TP_VERSION_MICRO(x) ((x) << S_TP_VERSION_MICRO)
#define G_TP_VERSION_MICRO(x) \
(((x) >> S_TP_VERSION_MICRO) & M_TP_VERSION_MICRO)
enum {
SGE_QSETS = 8, /* # of SGE Tx/Rx/RspQ sets */
SGE_RXQ_PER_SET = 2, /* # of Rx queues per set */
......@@ -654,6 +679,9 @@ const struct adapter_info *t3_get_adapter_info(unsigned int board_id);
int t3_seeprom_read(struct adapter *adapter, u32 addr, u32 *data);
int t3_seeprom_write(struct adapter *adapter, u32 addr, u32 data);
int t3_seeprom_wp(struct adapter *adapter, int enable);
int t3_check_tpsram_version(struct adapter *adapter);
int t3_check_tpsram(struct adapter *adapter, u8 *tp_ram, unsigned int size);
int t3_set_proto_sram(struct adapter *adap, u8 *data);
int t3_read_flash(struct adapter *adapter, unsigned int addr,
unsigned int nwords, u32 *data, int byte_oriented);
int t3_load_fw(struct adapter *adapter, const u8 * fw_data, unsigned int size);
......
......@@ -2088,6 +2088,42 @@ static void cxgb_netpoll(struct net_device *dev)
}
#endif
#define TPSRAM_NAME "t3%c_protocol_sram-%d.%d.%d.bin"
int update_tpsram(struct adapter *adap)
{
const struct firmware *tpsram;
char buf[64];
struct device *dev = &adap->pdev->dev;
int ret;
char rev;
rev = adap->params.rev == T3_REV_B2 ? 'b' : 'a';
snprintf(buf, sizeof(buf), TPSRAM_NAME, rev,
TP_VERSION_MAJOR, TP_VERSION_MINOR, TP_VERSION_MICRO);
ret = request_firmware(&tpsram, buf, dev);
if (ret < 0) {
dev_err(dev, "could not load TP SRAM: unable to load %s\n",
buf);
return ret;
}
ret = t3_check_tpsram(adap, tpsram->data, tpsram->size);
if (ret)
goto release_tpsram;
ret = t3_set_proto_sram(adap, tpsram->data);
if (ret)
dev_err(dev, "loading protocol SRAM failed\n");
release_tpsram:
release_firmware(tpsram);
return ret;
}
/*
* Periodic accumulation of MAC statistics.
*/
......@@ -2437,6 +2473,13 @@ static int __devinit init_one(struct pci_dev *pdev,
goto out_free_dev;
}
err = t3_check_tpsram_version(adapter);
if (err == -EINVAL)
err = update_tpsram(adapter);
if (err)
goto out_free_dev;
/*
* The card is now ready to go. If any errors occur during device
* registration we do not fail the whole card but rather proceed only
......
......@@ -1160,6 +1160,8 @@
#define A_TP_MOD_CHANNEL_WEIGHT 0x434
#define A_TP_MOD_RATE_LIMIT 0x438
#define A_TP_PIO_ADDR 0x440
#define A_TP_PIO_DATA 0x444
......@@ -1214,6 +1216,15 @@
#define G_TXDROPCNTCH0RCVD(x) (((x) >> S_TXDROPCNTCH0RCVD) & \
M_TXDROPCNTCH0RCVD)
#define A_TP_PROXY_FLOW_CNTL 0x4b0
#define A_TP_EMBED_OP_FIELD0 0x4e8
#define A_TP_EMBED_OP_FIELD1 0x4ec
#define A_TP_EMBED_OP_FIELD2 0x4f0
#define A_TP_EMBED_OP_FIELD3 0x4f4
#define A_TP_EMBED_OP_FIELD4 0x4f8
#define A_TP_EMBED_OP_FIELD5 0x4fc
#define A_ULPRX_CTL 0x500
#define S_ROUND_ROBIN 4
......
......@@ -847,6 +847,64 @@ static int t3_write_flash(struct adapter *adapter, unsigned int addr,
return 0;
}
/**
* t3_check_tpsram_version - read the tp sram version
* @adapter: the adapter
*
* Reads the protocol sram version from serial eeprom.
*/
int t3_check_tpsram_version(struct adapter *adapter)
{
int ret;
u32 vers;
unsigned int major, minor;
/* Get version loaded in SRAM */
t3_write_reg(adapter, A_TP_EMBED_OP_FIELD0, 0);
ret = t3_wait_op_done(adapter, A_TP_EMBED_OP_FIELD0,
1, 1, 5, 1);
if (ret)
return ret;
vers = t3_read_reg(adapter, A_TP_EMBED_OP_FIELD1);
major = G_TP_VERSION_MAJOR(vers);
minor = G_TP_VERSION_MINOR(vers);
if (major == TP_VERSION_MAJOR && minor == TP_VERSION_MINOR)
return 0;
return -EINVAL;
}
/**
* t3_check_tpsram - check if provided protocol SRAM
* is compatible with this driver
* @adapter: the adapter
* @tp_sram: the firmware image to write
* @size: image size
*
* Checks if an adapter's tp sram is compatible with the driver.
* Returns 0 if the versions are compatible, a negative error otherwise.
*/
int t3_check_tpsram(struct adapter *adapter, u8 *tp_sram, unsigned int size)
{
u32 csum;
unsigned int i;
const u32 *p = (const u32 *)tp_sram;
/* Verify checksum */
for (csum = 0, i = 0; i < size / sizeof(csum); i++)
csum += ntohl(p[i]);
if (csum != 0xffffffff) {
CH_ERR(adapter, "corrupted protocol SRAM image, checksum %u\n",
csum);
return -EINVAL;
}
return 0;
}
enum fw_version_type {
FW_VERSION_N3,
FW_VERSION_T3
......@@ -921,7 +979,7 @@ static int t3_flash_erase_sectors(struct adapter *adapter, int start, int end)
/*
* t3_load_fw - download firmware
* @adapter: the adapter
* @fw_data: the firrware image to write
* @fw_data: the firmware image to write
* @size: image size
*
* Write the supplied firmware image to the card's serial flash.
......@@ -2362,7 +2420,7 @@ static void tp_config(struct adapter *adap, const struct tp_params *p)
F_TCPCHECKSUMOFFLOAD | V_IPTTL(64));
t3_write_reg(adap, A_TP_TCP_OPTIONS, V_MTUDEFAULT(576) |
F_MTUENABLE | V_WINDOWSCALEMODE(1) |
V_TIMESTAMPSMODE(1) | V_SACKMODE(1) | V_SACKRX(1));
V_TIMESTAMPSMODE(0) | V_SACKMODE(1) | V_SACKRX(1));
t3_write_reg(adap, A_TP_DACK_CONFIG, V_AUTOSTATE3(1) |
V_AUTOSTATE2(1) | V_AUTOSTATE1(0) |
V_BYTETHRESHOLD(16384) | V_MSSTHRESHOLD(2) |
......@@ -2371,16 +2429,18 @@ static void tp_config(struct adapter *adap, const struct tp_params *p)
F_IPV6ENABLE | F_NICMODE);
t3_write_reg(adap, A_TP_TX_RESOURCE_LIMIT, 0x18141814);
t3_write_reg(adap, A_TP_PARA_REG4, 0x5050105);
t3_set_reg_field(adap, A_TP_PARA_REG6,
adap->params.rev > 0 ? F_ENABLEESND : F_T3A_ENABLEESND,
0);
t3_set_reg_field(adap, A_TP_PARA_REG6, 0,
adap->params.rev > 0 ? F_ENABLEESND :
F_T3A_ENABLEESND);
t3_set_reg_field(adap, A_TP_PC_CONFIG,
F_ENABLEEPCMDAFULL | F_ENABLEOCSPIFULL,
F_TXDEFERENABLE | F_HEARBEATDACK | F_TXCONGESTIONMODE |
F_RXCONGESTIONMODE);
F_ENABLEEPCMDAFULL,
F_ENABLEOCSPIFULL |F_TXDEFERENABLE | F_HEARBEATDACK |
F_TXCONGESTIONMODE | F_RXCONGESTIONMODE);
t3_set_reg_field(adap, A_TP_PC_CONFIG2, F_CHDRAFULL, 0);
t3_write_reg(adap, A_TP_PROXY_FLOW_CNTL, 1080);
t3_write_reg(adap, A_TP_PROXY_FLOW_CNTL, 1000);
if (adap->params.rev > 0) {
tp_wr_indirect(adap, A_TP_EGRESS_CONFIG, F_REWRITEFORCETOSIZE);
t3_set_reg_field(adap, A_TP_PARA_REG3, F_TXPACEAUTO,
......@@ -2390,9 +2450,10 @@ static void tp_config(struct adapter *adap, const struct tp_params *p)
} else
t3_set_reg_field(adap, A_TP_PARA_REG3, 0, F_TXPACEFIXED);
t3_write_reg(adap, A_TP_TX_MOD_QUEUE_WEIGHT1, 0x12121212);
t3_write_reg(adap, A_TP_TX_MOD_QUEUE_WEIGHT0, 0x12121212);
t3_write_reg(adap, A_TP_MOD_CHANNEL_WEIGHT, 0x1212);
t3_write_reg(adap, A_TP_TX_MOD_QUEUE_WEIGHT1, 0);
t3_write_reg(adap, A_TP_TX_MOD_QUEUE_WEIGHT0, 0);
t3_write_reg(adap, A_TP_MOD_CHANNEL_WEIGHT, 0);
t3_write_reg(adap, A_TP_MOD_RATE_LIMIT, 0xf2200000);
}
/* Desired TP timer resolution in usec */
......@@ -2468,6 +2529,7 @@ int t3_tp_set_coalescing_size(struct adapter *adap, unsigned int size, int psh)
val |= F_RXCOALESCEENABLE;
if (psh)
val |= F_RXCOALESCEPSHEN;
size = min(MAX_RX_COALESCING_LEN, size);
t3_write_reg(adap, A_TP_PARA_REG2, V_RXCOALESCESIZE(size) |
V_MAXRXDATA(MAX_RX_COALESCING_LEN));
}
......@@ -2496,11 +2558,11 @@ static void __devinit init_mtus(unsigned short mtus[])
* it can accomodate max size TCP/IP headers when SACK and timestamps
* are enabled and still have at least 8 bytes of payload.
*/
mtus[0] = 88;
mtus[1] = 256;
mtus[2] = 512;
mtus[3] = 576;
mtus[4] = 808;
mtus[1] = 88;
mtus[1] = 88;
mtus[2] = 256;
mtus[3] = 512;
mtus[4] = 576;
mtus[5] = 1024;
mtus[6] = 1280;
mtus[7] = 1492;
......@@ -2682,6 +2744,34 @@ static void ulp_config(struct adapter *adap, const struct tp_params *p)
t3_write_reg(adap, A_ULPRX_TDDP_TAGMASK, 0xffffffff);
}
/**
* t3_set_proto_sram - set the contents of the protocol sram
* @adapter: the adapter
* @data: the protocol image
*
* Write the contents of the protocol SRAM.
*/
int t3_set_proto_sram(struct adapter *adap, u8 *data)
{
int i;
u32 *buf = (u32 *)data;
for (i = 0; i < PROTO_SRAM_LINES; i++) {
t3_write_reg(adap, A_TP_EMBED_OP_FIELD5, cpu_to_be32(*buf++));
t3_write_reg(adap, A_TP_EMBED_OP_FIELD4, cpu_to_be32(*buf++));
t3_write_reg(adap, A_TP_EMBED_OP_FIELD3, cpu_to_be32(*buf++));
t3_write_reg(adap, A_TP_EMBED_OP_FIELD2, cpu_to_be32(*buf++));
t3_write_reg(adap, A_TP_EMBED_OP_FIELD1, cpu_to_be32(*buf++));
t3_write_reg(adap, A_TP_EMBED_OP_FIELD0, i << 1 | 1 << 31);
if (t3_wait_op_done(adap, A_TP_EMBED_OP_FIELD0, 1, 1, 5, 1))
return -EIO;
}
t3_write_reg(adap, A_TP_EMBED_OP_FIELD0, 0);
return 0;
}
void t3_config_trace_filter(struct adapter *adapter,
const struct trace_params *tp, int filter_index,
int invert, int enable)
......@@ -2802,7 +2892,7 @@ static void init_hw_for_avail_ports(struct adapter *adap, int nports)
t3_set_reg_field(adap, A_ULPTX_CONFIG, F_CFG_RR_ARB, 0);
t3_write_reg(adap, A_MPS_CFG, F_TPRXPORTEN | F_TPTXPORT0EN |
F_PORT0ACTIVE | F_ENFORCEPKT);
t3_write_reg(adap, A_PM1_TX_CFG, 0xc000c000);
t3_write_reg(adap, A_PM1_TX_CFG, 0xffffffff);
} else {
t3_set_reg_field(adap, A_ULPRX_CTL, 0, F_ROUND_ROBIN);
t3_set_reg_field(adap, A_ULPTX_CONFIG, 0, F_CFG_RR_ARB);
......@@ -3097,7 +3187,7 @@ int t3_init_hw(struct adapter *adapter, u32 fw_params)
else
t3_set_reg_field(adapter, A_PCIX_CFG, 0, F_CLIDECEN);
t3_write_reg(adapter, A_PM1_RX_CFG, 0xf000f000);
t3_write_reg(adapter, A_PM1_RX_CFG, 0xffffffff);
init_hw_for_avail_ports(adapter, adapter->params.nports);
t3_sge_init(adapter, &adapter->params.sge);
......
......@@ -39,6 +39,6 @@
/* Firmware version */
#define FW_VERSION_MAJOR 4
#define FW_VERSION_MINOR 0
#define FW_VERSION_MINOR 1
#define FW_VERSION_MICRO 0
#endif /* __CHELSIO_VERSION_H */
......@@ -39,7 +39,7 @@
#include <asm/io.h>
#define DRV_NAME "ehea"
#define DRV_VERSION "EHEA_0064"
#define DRV_VERSION "EHEA_0065"
#define EHEA_MSG_DEFAULT (NETIF_MSG_LINK | NETIF_MSG_TIMER \
| NETIF_MSG_RX_ERR | NETIF_MSG_TX_ERR)
......@@ -136,10 +136,10 @@ void ehea_dump(void *adr, int len, char *msg);
(0xffffffffffffffffULL >> ((64 - (mask)) & 0xffff))
#define EHEA_BMASK_SET(mask, value) \
((EHEA_BMASK_MASK(mask) & ((u64)(value))) << EHEA_BMASK_SHIFTPOS(mask))
((EHEA_BMASK_MASK(mask) & ((u64)(value))) << EHEA_BMASK_SHIFTPOS(mask))
#define EHEA_BMASK_GET(mask, value) \
(EHEA_BMASK_MASK(mask) & (((u64)(value)) >> EHEA_BMASK_SHIFTPOS(mask)))
(EHEA_BMASK_MASK(mask) & (((u64)(value)) >> EHEA_BMASK_SHIFTPOS(mask)))
/*
* Generic ehea page
......@@ -190,7 +190,7 @@ struct ehea_av;
* Queue attributes passed to ehea_create_qp()
*/
struct ehea_qp_init_attr {
/* input parameter */
/* input parameter */
u32 qp_token; /* queue token */
u8 low_lat_rq1;
u8 signalingtype; /* cqe generation flag */
......@@ -212,7 +212,7 @@ struct ehea_qp_init_attr {
u64 recv_cq_handle;
u64 aff_eq_handle;
/* output parameter */
/* output parameter */
u32 qp_nr;
u16 act_nr_send_wqes;
u16 act_nr_rwqes_rq1;
......@@ -279,12 +279,12 @@ struct ehea_qp {
* Completion Queue attributes
*/
struct ehea_cq_attr {
/* input parameter */
/* input parameter */
u32 max_nr_of_cqes;
u32 cq_token;
u64 eq_handle;
/* output parameter */
/* output parameter */
u32 act_nr_of_cqes;
u32 nr_pages;
};
......
......@@ -211,34 +211,34 @@ static inline void epa_store_acc(struct h_epa epa, u32 offset, u64 value)
}
#define epa_store_eq(epa, offset, value)\
epa_store(epa, EQTEMM_OFFSET(offset), value)
epa_store(epa, EQTEMM_OFFSET(offset), value)
#define epa_load_eq(epa, offset)\
epa_load(epa, EQTEMM_OFFSET(offset))
epa_load(epa, EQTEMM_OFFSET(offset))
#define epa_store_cq(epa, offset, value)\
epa_store(epa, CQTEMM_OFFSET(offset), value)
epa_store(epa, CQTEMM_OFFSET(offset), value)
#define epa_load_cq(epa, offset)\
epa_load(epa, CQTEMM_OFFSET(offset))
epa_load(epa, CQTEMM_OFFSET(offset))
#define epa_store_qp(epa, offset, value)\
epa_store(epa, QPTEMM_OFFSET(offset), value)
epa_store(epa, QPTEMM_OFFSET(offset), value)
#define epa_load_qp(epa, offset)\
epa_load(epa, QPTEMM_OFFSET(offset))
epa_load(epa, QPTEMM_OFFSET(offset))
#define epa_store_qped(epa, offset, value)\
epa_store(epa, QPEDMM_OFFSET(offset), value)
epa_store(epa, QPEDMM_OFFSET(offset), value)
#define epa_load_qped(epa, offset)\
epa_load(epa, QPEDMM_OFFSET(offset))
epa_load(epa, QPEDMM_OFFSET(offset))
#define epa_store_mrmw(epa, offset, value)\
epa_store(epa, MRMWMM_OFFSET(offset), value)
epa_store(epa, MRMWMM_OFFSET(offset), value)
#define epa_load_mrmw(epa, offset)\
epa_load(epa, MRMWMM_OFFSET(offset))
epa_load(epa, MRMWMM_OFFSET(offset))
#define epa_store_base(epa, offset, value)\
epa_store(epa, HCAGR_OFFSET(offset), value)
epa_store(epa, HCAGR_OFFSET(offset), value)
#define epa_load_base(epa, offset)\
epa_load(epa, HCAGR_OFFSET(offset))
epa_load(epa, HCAGR_OFFSET(offset))
static inline void ehea_update_sqa(struct ehea_qp *qp, u16 nr_wqes)
{
......
......@@ -81,7 +81,7 @@ MODULE_PARM_DESC(use_mcs, " 0:NAPI, 1:Multiple receive queues, Default = 1 ");
static int port_name_cnt = 0;
static int __devinit ehea_probe_adapter(struct ibmebus_dev *dev,
const struct of_device_id *id);
const struct of_device_id *id);
static int __devexit ehea_remove(struct ibmebus_dev *dev);
......@@ -236,7 +236,7 @@ static int ehea_refill_rq_def(struct ehea_port_res *pr,
rwqe = ehea_get_next_rwqe(qp, rq_nr);
rwqe->wr_id = EHEA_BMASK_SET(EHEA_WR_ID_TYPE, wqe_type)
| EHEA_BMASK_SET(EHEA_WR_ID_INDEX, index);
| EHEA_BMASK_SET(EHEA_WR_ID_INDEX, index);
rwqe->sg_list[0].l_key = pr->recv_mr.lkey;
rwqe->sg_list[0].vaddr = (u64)skb->data;
rwqe->sg_list[0].len = packet_size;
......@@ -427,7 +427,7 @@ static struct ehea_cqe *ehea_proc_rwqes(struct net_device *dev,
break;
}
skb_copy_to_linear_data(skb, ((char*)cqe) + 64,
cqe->num_bytes_transfered - 4);
cqe->num_bytes_transfered - 4);
ehea_fill_skb(port->netdev, skb, cqe);
} else if (rq == 2) { /* RQ2 */
skb = get_skb_by_index(skb_arr_rq2,
......@@ -618,7 +618,7 @@ static struct ehea_port *ehea_get_port(struct ehea_adapter *adapter,
for (i = 0; i < EHEA_MAX_PORTS; i++)
if (adapter->port[i])
if (adapter->port[i]->logical_port_id == logical_port)
if (adapter->port[i]->logical_port_id == logical_port)
return adapter->port[i];
return NULL;
}
......@@ -1695,6 +1695,7 @@ static void ehea_xmit2(struct sk_buff *skb, struct net_device *dev,
{
if (skb->protocol == htons(ETH_P_IP)) {
const struct iphdr *iph = ip_hdr(skb);
/* IPv4 */
swqe->tx_control |= EHEA_SWQE_CRC
| EHEA_SWQE_IP_CHECKSUM
......@@ -1705,13 +1706,12 @@ static void ehea_xmit2(struct sk_buff *skb, struct net_device *dev,
write_ip_start_end(swqe, skb);
if (iph->protocol == IPPROTO_UDP) {
if ((iph->frag_off & IP_MF) ||
(iph->frag_off & IP_OFFSET))
if ((iph->frag_off & IP_MF)
|| (iph->frag_off & IP_OFFSET))
/* IP fragment, so don't change cs */
swqe->tx_control &= ~EHEA_SWQE_TCP_CHECKSUM;
else
write_udp_offset_end(swqe, skb);
} else if (iph->protocol == IPPROTO_TCP) {
write_tcp_offset_end(swqe, skb);
}
......@@ -1739,6 +1739,7 @@ static void ehea_xmit3(struct sk_buff *skb, struct net_device *dev,
if (skb->protocol == htons(ETH_P_IP)) {
const struct iphdr *iph = ip_hdr(skb);
/* IPv4 */
write_ip_start_end(swqe, skb);
......@@ -1751,8 +1752,8 @@ static void ehea_xmit3(struct sk_buff *skb, struct net_device *dev,
write_tcp_offset_end(swqe, skb);
} else if (iph->protocol == IPPROTO_UDP) {
if ((iph->frag_off & IP_MF) ||
(iph->frag_off & IP_OFFSET))
if ((iph->frag_off & IP_MF)
|| (iph->frag_off & IP_OFFSET))
/* IP fragment, so don't change cs */
swqe->tx_control |= EHEA_SWQE_CRC
| EHEA_SWQE_IMM_DATA_PRESENT;
......@@ -2407,7 +2408,7 @@ static void __devinit logical_port_release(struct device *dev)
}
static int ehea_driver_sysfs_add(struct device *dev,
struct device_driver *driver)
struct device_driver *driver)
{
int ret;
......@@ -2424,7 +2425,7 @@ static int ehea_driver_sysfs_add(struct device *dev,
}
static void ehea_driver_sysfs_remove(struct device *dev,
struct device_driver *driver)
struct device_driver *driver)
{
struct device_driver *drv = driver;
......@@ -2453,7 +2454,7 @@ static struct device *ehea_register_port(struct ehea_port *port,
}
ret = device_create_file(&port->ofdev.dev, &dev_attr_log_port_id);
if (ret) {
if (ret) {
ehea_error("failed to register attributes, ret=%d", ret);
goto out_unreg_of_dev;
}
......@@ -2601,6 +2602,7 @@ static int ehea_setup_ports(struct ehea_adapter *adapter)
{
struct device_node *lhea_dn;
struct device_node *eth_dn = NULL;
const u32 *dn_log_port_id;
int i = 0;
......@@ -2608,7 +2610,7 @@ static int ehea_setup_ports(struct ehea_adapter *adapter)
while ((eth_dn = of_get_next_child(lhea_dn, eth_dn))) {
dn_log_port_id = of_get_property(eth_dn, "ibm,hea-port-no",
NULL);
NULL);
if (!dn_log_port_id) {
ehea_error("bad device node: eth_dn name=%s",
eth_dn->full_name);
......@@ -2648,7 +2650,7 @@ static struct device_node *ehea_get_eth_dn(struct ehea_adapter *adapter,
while ((eth_dn = of_get_next_child(lhea_dn, eth_dn))) {
dn_log_port_id = of_get_property(eth_dn, "ibm,hea-port-no",
NULL);
NULL);
if (dn_log_port_id)
if (*dn_log_port_id == logical_port_id)
return eth_dn;
......@@ -2789,7 +2791,7 @@ static int __devinit ehea_probe_adapter(struct ibmebus_dev *dev,
adapter->ebus_dev = dev;
adapter_handle = of_get_property(dev->ofdev.node, "ibm,hea-handle",
NULL);
NULL);
if (adapter_handle)
adapter->handle = *adapter_handle;
......
......@@ -211,7 +211,7 @@ u64 ehea_destroy_cq_res(struct ehea_cq *cq, u64 force)
u64 hret;
u64 adapter_handle = cq->adapter->handle;
/* deregister all previous registered pages */
/* deregister all previous registered pages */
hret = ehea_h_free_resource(adapter_handle, cq->fw_handle, force);
if (hret != H_SUCCESS)
return hret;
......@@ -362,7 +362,7 @@ int ehea_destroy_eq(struct ehea_eq *eq)
if (hret != H_SUCCESS) {
ehea_error("destroy EQ failed");
return -EIO;
}
}
return 0;
}
......@@ -507,44 +507,44 @@ struct ehea_qp *ehea_create_qp(struct ehea_adapter *adapter,
u64 ehea_destroy_qp_res(struct ehea_qp *qp, u64 force)
{
u64 hret;
struct ehea_qp_init_attr *qp_attr = &qp->init_attr;
u64 hret;
struct ehea_qp_init_attr *qp_attr = &qp->init_attr;
ehea_h_disable_and_get_hea(qp->adapter->handle, qp->fw_handle);
hret = ehea_h_free_resource(qp->adapter->handle, qp->fw_handle, force);
if (hret != H_SUCCESS)
return hret;
ehea_h_disable_and_get_hea(qp->adapter->handle, qp->fw_handle);
hret = ehea_h_free_resource(qp->adapter->handle, qp->fw_handle, force);
if (hret != H_SUCCESS)
return hret;
hw_queue_dtor(&qp->hw_squeue);
hw_queue_dtor(&qp->hw_rqueue1);
hw_queue_dtor(&qp->hw_squeue);
hw_queue_dtor(&qp->hw_rqueue1);
if (qp_attr->rq_count > 1)
hw_queue_dtor(&qp->hw_rqueue2);
if (qp_attr->rq_count > 2)
hw_queue_dtor(&qp->hw_rqueue3);
kfree(qp);
if (qp_attr->rq_count > 1)
hw_queue_dtor(&qp->hw_rqueue2);
if (qp_attr->rq_count > 2)
hw_queue_dtor(&qp->hw_rqueue3);
kfree(qp);
return hret;
return hret;
}
int ehea_destroy_qp(struct ehea_qp *qp)
{
u64 hret;
if (!qp)
return 0;
u64 hret;
if (!qp)
return 0;
if ((hret = ehea_destroy_qp_res(qp, NORMAL_FREE)) == H_R_STATE) {
ehea_error_data(qp->adapter, qp->fw_handle);
hret = ehea_destroy_qp_res(qp, FORCE_FREE);
}
if ((hret = ehea_destroy_qp_res(qp, NORMAL_FREE)) == H_R_STATE) {
ehea_error_data(qp->adapter, qp->fw_handle);
hret = ehea_destroy_qp_res(qp, FORCE_FREE);
}
if (hret != H_SUCCESS) {
ehea_error("destroy QP failed");
return -EIO;
}
if (hret != H_SUCCESS) {
ehea_error("destroy QP failed");
return -EIO;
}
return 0;
return 0;
}
int ehea_reg_kernel_mr(struct ehea_adapter *adapter, struct ehea_mr *mr)
......
config FEC_8XX
tristate "Motorola 8xx FEC driver"
depends on NET_ETHERNET && 8xx
depends on 8XX
select MII
config FEC_8XX_GENERIC_PHY
......
config FS_ENET
tristate "Freescale Ethernet Driver"
depends on NET_ETHERNET && (CPM1 || CPM2)
depends on CPM1 || CPM2
select MII
config FS_ENET_HAS_SCC
......
......@@ -130,6 +130,9 @@ static int gfar_remove(struct platform_device *pdev);
static void free_skb_resources(struct gfar_private *priv);
static void gfar_set_multi(struct net_device *dev);
static void gfar_set_hash_for_addr(struct net_device *dev, u8 *addr);
static void gfar_configure_serdes(struct net_device *dev);
extern int gfar_local_mdio_write(struct gfar_mii *regs, int mii_id, int regnum, u16 value);
extern int gfar_local_mdio_read(struct gfar_mii *regs, int mii_id, int regnum);
#ifdef CONFIG_GFAR_NAPI
static int gfar_poll(struct net_device *dev, int *budget);
#endif
......@@ -451,6 +454,9 @@ static int init_phy(struct net_device *dev)
phydev = phy_connect(dev, phy_id, &adjust_link, 0, interface);
if (interface == PHY_INTERFACE_MODE_SGMII)
gfar_configure_serdes(dev);
if (IS_ERR(phydev)) {
printk(KERN_ERR "%s: Could not attach to PHY\n", dev->name);
return PTR_ERR(phydev);
......@@ -465,6 +471,27 @@ static int init_phy(struct net_device *dev)
return 0;
}
static void gfar_configure_serdes(struct net_device *dev)
{
struct gfar_private *priv = netdev_priv(dev);
struct gfar_mii __iomem *regs =
(void __iomem *)&priv->regs->gfar_mii_regs;
/* Initialise TBI i/f to communicate with serdes (lynx phy) */
/* Single clk mode, mii mode off (for serdes communication) */
gfar_local_mdio_write(regs, TBIPA_VALUE, MII_TBICON, TBICON_CLK_SELECT);
/* Supported pause and full-duplex, no half-duplex */
gfar_local_mdio_write(regs, TBIPA_VALUE, MII_ADVERTISE,
ADVERTISE_1000XFULL | ADVERTISE_1000XPAUSE |
ADVERTISE_1000XPSE_ASYM);
/* ANEG enable, restart ANEG, full duplex mode, speed[1] set */
gfar_local_mdio_write(regs, TBIPA_VALUE, MII_BMCR, BMCR_ANENABLE |
BMCR_ANRESTART | BMCR_FULLDPLX | BMCR_SPEED1000);
}
static void init_registers(struct net_device *dev)
{
struct gfar_private *priv = netdev_priv(dev);
......
......@@ -136,6 +136,12 @@ extern const char gfar_driver_version[];
#define MIIMCFG_RESET 0x80000000
#define MIIMIND_BUSY 0x00000001
/* TBI register addresses */
#define MII_TBICON 0x11
/* TBICON register bit fields */
#define TBICON_CLK_SELECT 0x0020
/* MAC register bits */
#define MACCFG1_SOFT_RESET 0x80000000
#define MACCFG1_RESET_RX_MC 0x00080000
......
......@@ -43,13 +43,18 @@
#include "gianfar.h"
#include "gianfar_mii.h"
/* Write value to the PHY at mii_id at register regnum,
* on the bus, waiting until the write is done before returning.
* All PHY configuration is done through the TSEC1 MIIM regs */
int gfar_mdio_write(struct mii_bus *bus, int mii_id, int regnum, u16 value)
/*
* Write value to the PHY at mii_id at register regnum,
* on the bus attached to the local interface, which may be different from the
* generic mdio bus (tied to a single interface), waiting until the write is
* done before returning. This is helpful in programming interfaces like
* the TBI which control interfaces like onchip SERDES and are always tied to
* the local mdio pins, which may not be the same as system mdio bus, used for
* controlling the external PHYs, for example.
*/
int gfar_local_mdio_write(struct gfar_mii *regs, int mii_id,
int regnum, u16 value)
{
struct gfar_mii __iomem *regs = (void __iomem *)bus->priv;
/* Set the PHY address and the register address we want to write */
gfar_write(&regs->miimadd, (mii_id << 8) | regnum);
......@@ -63,12 +68,19 @@ int gfar_mdio_write(struct mii_bus *bus, int mii_id, int regnum, u16 value)
return 0;
}
/* Read the bus for PHY at addr mii_id, register regnum, and
* return the value. Clears miimcom first. All PHY
* configuration has to be done through the TSEC1 MIIM regs */
int gfar_mdio_read(struct mii_bus *bus, int mii_id, int regnum)
/*
* Read the bus for PHY at addr mii_id, register regnum, and
* return the value. Clears miimcom first. All PHY operation
* done on the bus attached to the local interface,
* which may be different from the generic mdio bus
* This is helpful in programming interfaces like
* the TBI which, inturn, control interfaces like onchip SERDES
* and are always tied to the local mdio pins, which may not be the
* same as system mdio bus, used for controlling the external PHYs, for eg.
*/
int gfar_local_mdio_read(struct gfar_mii *regs, int mii_id, int regnum)
{
struct gfar_mii __iomem *regs = (void __iomem *)bus->priv;
u16 value;
/* Set the PHY address and the register address we want to read */
......@@ -88,6 +100,27 @@ int gfar_mdio_read(struct mii_bus *bus, int mii_id, int regnum)
return value;
}
/* Write value to the PHY at mii_id at register regnum,
* on the bus, waiting until the write is done before returning.
* All PHY configuration is done through the TSEC1 MIIM regs */
int gfar_mdio_write(struct mii_bus *bus, int mii_id, int regnum, u16 value)
{
struct gfar_mii __iomem *regs = (void __iomem *)bus->priv;
/* Write to the local MII regs */
return(gfar_local_mdio_write(regs, mii_id, regnum, value));
}
/* Read the bus for PHY at addr mii_id, register regnum, and
* return the value. Clears miimcom first. All PHY
* configuration has to be done through the TSEC1 MIIM regs */
int gfar_mdio_read(struct mii_bus *bus, int mii_id, int regnum)
{
struct gfar_mii __iomem *regs = (void __iomem *)bus->priv;
/* Read the local MII regs */
return(gfar_local_mdio_read(regs, mii_id, regnum));
}
/* Reset the MIIM registers, and wait for the bus to free */
int gfar_mdio_reset(struct mii_bus *bus)
......
......@@ -113,8 +113,7 @@ int mlx4_qp_modify(struct mlx4_dev *dev, struct mlx4_mtt *mtt,
struct mlx4_cmd_mailbox *mailbox;
int ret = 0;
if (cur_state < 0 || cur_state >= MLX4_QP_NUM_STATE ||
new_state < 0 || cur_state >= MLX4_QP_NUM_STATE ||
if (cur_state >= MLX4_QP_NUM_STATE || cur_state >= MLX4_QP_NUM_STATE ||
!op[cur_state][new_state])
return -EINVAL;
......
......@@ -724,7 +724,7 @@ int netxen_niu_disable_gbe_port(struct netxen_adapter *adapter)
__u32 mac_cfg0;
u32 port = physical_port[adapter->portnum];
if ((port < 0) || (port > NETXEN_NIU_MAX_GBE_PORTS))
if (port > NETXEN_NIU_MAX_GBE_PORTS)
return -EINVAL;
mac_cfg0 = 0;
netxen_gb_soft_reset(mac_cfg0);
......@@ -757,7 +757,7 @@ int netxen_niu_set_promiscuous_mode(struct netxen_adapter *adapter,
__u32 reg;
u32 port = physical_port[adapter->portnum];
if ((port < 0) || (port > NETXEN_NIU_MAX_GBE_PORTS))
if (port > NETXEN_NIU_MAX_GBE_PORTS)
return -EINVAL;
/* save previous contents */
......@@ -894,7 +894,7 @@ int netxen_niu_xg_set_promiscuous_mode(struct netxen_adapter *adapter,
__u32 reg;
u32 port = physical_port[adapter->portnum];
if ((port < 0) || (port > NETXEN_NIU_MAX_XG_PORTS))
if (port > NETXEN_NIU_MAX_XG_PORTS)
return -EINVAL;
if (netxen_nic_hw_read_wx(adapter,
......
......@@ -755,7 +755,7 @@ static int pasemi_mac_open(struct net_device *dev)
flags |= PAS_MAC_CFG_PCFG_TSR_1G | PAS_MAC_CFG_PCFG_SPD_1G;
pci_write_config_dword(mac->iob_pdev, PAS_IOB_DMA_RXCH_CFG(mac->dma_rxch),
PAS_IOB_DMA_RXCH_CFG_CNTTH(1));
PAS_IOB_DMA_RXCH_CFG_CNTTH(0));
pci_write_config_dword(mac->iob_pdev, PAS_IOB_DMA_TXCH_CFG(mac->dma_txch),
PAS_IOB_DMA_TXCH_CFG_CNTTH(32));
......
......@@ -521,6 +521,7 @@ static void mdio_write(kio_addr_t addr, int phy_id, int loc, int value)
static int axnet_open(struct net_device *dev)
{
int ret;
axnet_dev_t *info = PRIV(dev);
struct pcmcia_device *link = info->p_dev;
......@@ -529,9 +530,11 @@ static int axnet_open(struct net_device *dev)
if (!pcmcia_dev_present(link))
return -ENODEV;
link->open++;
ret = request_irq(dev->irq, ei_irq_wrapper, IRQF_SHARED, "axnet_cs", dev);
if (ret)
return ret;
request_irq(dev->irq, ei_irq_wrapper, IRQF_SHARED, "axnet_cs", dev);
link->open++;
info->link_status = 0x00;
init_timer(&info->watchdog);
......
......@@ -109,7 +109,7 @@ static const struct ethtool_ops netdev_ethtool_ops;
card type
*/
typedef enum { MBH10302, MBH10304, TDK, CONTEC, LA501, UNGERMANN,
XXX10304
XXX10304, NEC, KME
} cardtype_t;
/*
@@ -374,6 +374,18 @@ static int fmvj18x_config(struct pcmcia_device *link)
link->io.NumPorts2 = 8;
}
break;
case MANFID_NEC:
cardtype = NEC; /* MultiFunction Card */
link->conf.ConfigBase = 0x800;
link->conf.ConfigIndex = 0x47;
link->io.NumPorts2 = 8;
break;
case MANFID_KME:
cardtype = KME; /* MultiFunction Card */
link->conf.ConfigBase = 0x800;
link->conf.ConfigIndex = 0x47;
link->io.NumPorts2 = 8;
break;
case MANFID_CONTEC:
cardtype = CONTEC;
break;
@@ -450,6 +462,8 @@ static int fmvj18x_config(struct pcmcia_device *link)
case TDK:
case LA501:
case CONTEC:
case NEC:
case KME:
tuple.DesiredTuple = CISTPL_FUNCE;
tuple.TupleOffset = 0;
CS_CHECK(GetFirstTuple, pcmcia_get_first_tuple(link, &tuple));
@@ -469,6 +483,10 @@ static int fmvj18x_config(struct pcmcia_device *link)
card_name = "TDK LAK-CD021";
} else if( cardtype == LA501 ) {
card_name = "LA501";
} else if( cardtype == NEC ) {
card_name = "PK-UG-J001";
} else if( cardtype == KME ) {
card_name = "Panasonic";
} else {
card_name = "C-NET(PC)C";
}
@@ -678,8 +696,11 @@ static struct pcmcia_device_id fmvj18x_ids[] = {
PCMCIA_DEVICE_PROD_ID1("PCMCIA MBH10302", 0x8f4005da),
PCMCIA_DEVICE_PROD_ID1("UBKK,V2.0", 0x90888080),
PCMCIA_PFC_DEVICE_PROD_ID12(0, "TDK", "GlobalNetworker 3410/3412", 0x1eae9475, 0xd9a93bed),
PCMCIA_PFC_DEVICE_PROD_ID12(0, "NEC", "PK-UG-J001" ,0x18df0ba0 ,0x831b1064),
PCMCIA_PFC_DEVICE_MANF_CARD(0, 0x0105, 0x0d0a),
PCMCIA_PFC_DEVICE_MANF_CARD(0, 0x0105, 0x0e0a),
PCMCIA_PFC_DEVICE_MANF_CARD(0, 0x0032, 0x0a05),
PCMCIA_PFC_DEVICE_MANF_CARD(0, 0x0032, 0x1101),
PCMCIA_DEVICE_NULL,
};
MODULE_DEVICE_TABLE(pcmcia, fmvj18x_ids);
......
@@ -960,6 +960,7 @@ static void mii_phy_probe(struct net_device *dev)
static int pcnet_open(struct net_device *dev)
{
int ret;
pcnet_dev_t *info = PRIV(dev);
struct pcmcia_device *link = info->p_dev;
@@ -968,10 +969,12 @@ static int pcnet_open(struct net_device *dev)
if (!pcmcia_dev_present(link))
return -ENODEV;
link->open++;
set_misc_reg(dev);
request_irq(dev->irq, ei_irq_wrapper, IRQF_SHARED, dev_info, dev);
ret = request_irq(dev->irq, ei_irq_wrapper, IRQF_SHARED, dev_info, dev);
if (ret)
return ret;
link->open++;
info->phy_id = info->eth_phy;
info->link_status = 0x00;
@@ -1552,6 +1555,7 @@ static struct pcmcia_device_id pcnet_ids[] = {
PCMCIA_PFC_DEVICE_PROD_ID12(0, "Grey Cell", "GCS3000", 0x2a151fac, 0x48b932ae),
PCMCIA_PFC_DEVICE_PROD_ID12(0, "Linksys", "EtherFast 10&100 + 56K PC Card (PCMLM56)", 0x0733cc81, 0xb3765033),
PCMCIA_PFC_DEVICE_PROD_ID12(0, "LINKSYS", "PCMLM336", 0xf7cb0b07, 0x7a821b58),
PCMCIA_PFC_DEVICE_PROD_ID12(0, "MICRO RESEARCH", "COMBO-L/M-336", 0xb2ced065, 0x3ced0555),
PCMCIA_PFC_DEVICE_PROD_ID12(0, "PCMCIAs", "ComboCard", 0xdcfe12d3, 0xcd8906cc),
PCMCIA_PFC_DEVICE_PROD_ID12(0, "PCMCIAs", "LanModem", 0xdcfe12d3, 0xc67c648f),
PCMCIA_MFC_DEVICE_PROD_ID12(0, "IBM", "Home and Away 28.8 PC Card ", 0xb569a6e5, 0x5bd4ff2c),
......
@@ -55,6 +55,11 @@ config BROADCOM_PHY
---help---
Currently supports the BCM5411, BCM5421 and BCM5461 PHYs.
config ICPLUS_PHY
tristate "Drivers for ICPlus PHYs"
---help---
Currently supports the IP175C PHY.
config FIXED_PHY
tristate "Drivers for PHY emulation on fixed speed/link"
---help---
......
@@ -11,4 +11,5 @@ obj-$(CONFIG_QSEMI_PHY) += qsemi.o
obj-$(CONFIG_SMSC_PHY) += smsc.o
obj-$(CONFIG_VITESSE_PHY) += vitesse.o
obj-$(CONFIG_BROADCOM_PHY) += broadcom.o
obj-$(CONFIG_ICPLUS_PHY) += icplus.o
obj-$(CONFIG_FIXED_PHY) += fixed.o
/*
* Driver for ICPlus PHYs
*
* Copyright (c) 2007 Freescale Semiconductor, Inc.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*
*/
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/errno.h>
#include <linux/unistd.h>
#include <linux/slab.h>
#include <linux/interrupt.h>
#include <linux/init.h>
#include <linux/delay.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <linux/spinlock.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/mii.h>
#include <linux/ethtool.h>
#include <linux/phy.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/uaccess.h>
MODULE_DESCRIPTION("ICPlus IP175C PHY driver");
MODULE_AUTHOR("Michael Barkowski");
MODULE_LICENSE("GPL");
static int ip175c_config_init(struct phy_device *phydev)
{
int err, i;
static int full_reset_performed = 0;
if (full_reset_performed == 0) {
/* master reset */
err = phydev->bus->write(phydev->bus, 30, 0, 0x175c);
if (err < 0)
return err;
/* ensure no bus delays overlap reset period */
err = phydev->bus->read(phydev->bus, 30, 0);
/* data sheet specifies reset period is 2 msec */
mdelay(2);
/* enable IP175C mode */
err = phydev->bus->write(phydev->bus, 29, 31, 0x175c);
if (err < 0)
return err;
/* Set MII0 speed and duplex (in PHY mode) */
err = phydev->bus->write(phydev->bus, 29, 22, 0x420);
if (err < 0)
return err;
/* reset switch ports */
for (i = 0; i < 5; i++) {
err = phydev->bus->write(phydev->bus, i,
MII_BMCR, BMCR_RESET);
if (err < 0)
return err;
}
for (i = 0; i < 5; i++)
err = phydev->bus->read(phydev->bus, i, MII_BMCR);
mdelay(2);
full_reset_performed = 1;
}
if (phydev->addr != 4) {
phydev->state = PHY_RUNNING;
phydev->speed = SPEED_100;
phydev->duplex = DUPLEX_FULL;
phydev->link = 1;
netif_carrier_on(phydev->attached_dev);
}
return 0;
}
static int ip175c_read_status(struct phy_device *phydev)
{
if (phydev->addr == 4) /* WAN port */
genphy_read_status(phydev);
else
/* Don't need to read status for switch ports */
phydev->irq = PHY_IGNORE_INTERRUPT;
return 0;
}
static int ip175c_config_aneg(struct phy_device *phydev)
{
if (phydev->addr == 4) /* WAN port */
genphy_config_aneg(phydev);
return 0;
}
static struct phy_driver ip175c_driver = {
.phy_id = 0x02430d80,
.name = "ICPlus IP175C",
.phy_id_mask = 0x0ffffff0,
.features = PHY_BASIC_FEATURES,
.config_init = &ip175c_config_init,
.config_aneg = &ip175c_config_aneg,
.read_status = &ip175c_read_status,
.driver = { .owner = THIS_MODULE,},
};
static int __init ip175c_init(void)
{
return phy_driver_register(&ip175c_driver);
}
static void __exit ip175c_exit(void)
{
phy_driver_unregister(&ip175c_driver);
}
module_init(ip175c_init);
module_exit(ip175c_exit);
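For reference, the PHY core binds ip175c_driver by masking the ID it reads from the device: with .phy_id = 0x02430d80 and .phy_id_mask = 0x0ffffff0, any chip whose MII ID matches in the masked bits is claimed regardless of the low revision nibble. The comparison the PHY layer performs amounts roughly to:

/* Simplified sketch of the ID match done by the PHY core. */
int matches = (phydev->phy_id & ip175c_driver.phy_id_mask) ==
	      (ip175c_driver.phy_id & ip175c_driver.phy_id_mask);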
@@ -60,6 +60,7 @@
#define MII_M1111_PHY_EXT_SR 0x1b
#define MII_M1111_HWCFG_MODE_MASK 0xf
#define MII_M1111_HWCFG_MODE_RGMII 0xb
#define MII_M1111_HWCFG_MODE_SGMII_NO_CLK 0x4
MODULE_DESCRIPTION("Marvell PHY driver");
MODULE_AUTHOR("Andy Fleming");
@@ -169,6 +170,21 @@ static int m88e1111_config_init(struct phy_device *phydev)
return err;
}
if (phydev->interface == PHY_INTERFACE_MODE_SGMII) {
int temp;
temp = phy_read(phydev, MII_M1111_PHY_EXT_SR);
if (temp < 0)
return temp;
temp &= ~(MII_M1111_HWCFG_MODE_MASK);
temp |= MII_M1111_HWCFG_MODE_SGMII_NO_CLK;
err = phy_write(phydev, MII_M1111_PHY_EXT_SR, temp);
if (err < 0)
return err;
}
err = phy_write(phydev, MII_BMCR, BMCR_RESET);
if (err < 0)
return err;
......
@@ -2433,37 +2433,22 @@ static int ql_get_seg_count(struct ql3_adapter *qdev,
return -1;
}
static void ql_hw_csum_setup(struct sk_buff *skb,
static void ql_hw_csum_setup(const struct sk_buff *skb,
struct ob_mac_iocb_req *mac_iocb_ptr)
{
struct ethhdr *eth;
struct iphdr *ip = NULL;
u8 offset = ETH_HLEN;
const struct iphdr *ip = ip_hdr(skb);
eth = (struct ethhdr *)(skb->data);
mac_iocb_ptr->ip_hdr_off = skb_network_offset(skb);
mac_iocb_ptr->ip_hdr_len = ip->ihl;
if (eth->h_proto == __constant_htons(ETH_P_IP)) {
ip = (struct iphdr *)&skb->data[ETH_HLEN];
} else if (eth->h_proto == htons(ETH_P_8021Q) &&
((struct vlan_ethhdr *)skb->data)->
h_vlan_encapsulated_proto == __constant_htons(ETH_P_IP)) {
ip = (struct iphdr *)&skb->data[VLAN_ETH_HLEN];
offset = VLAN_ETH_HLEN;
}
if (ip) {
if (ip->protocol == IPPROTO_TCP) {
mac_iocb_ptr->flags1 |= OB_3032MAC_IOCB_REQ_TC |
if (ip->protocol == IPPROTO_TCP) {
mac_iocb_ptr->flags1 |= OB_3032MAC_IOCB_REQ_TC |
OB_3032MAC_IOCB_REQ_IC;
mac_iocb_ptr->ip_hdr_off = offset;
mac_iocb_ptr->ip_hdr_len = ip->ihl;
} else if (ip->protocol == IPPROTO_UDP) {
mac_iocb_ptr->flags1 |= OB_3032MAC_IOCB_REQ_UC |
} else {
mac_iocb_ptr->flags1 |= OB_3032MAC_IOCB_REQ_UC |
OB_3032MAC_IOCB_REQ_IC;
mac_iocb_ptr->ip_hdr_off = offset;
mac_iocb_ptr->ip_hdr_len = ip->ihl;
}
}
}
/*
......
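The rewritten ql_hw_csum_setup() above relies on the skb header helpers instead of re-parsing the Ethernet (and possible VLAN) header by hand: by the time a TX skb reaches the driver the stack has already recorded where the network header starts, so ip_hdr() and skb_network_offset() give the right answers whether or not a VLAN tag precedes the IP header. A minimal illustration of the two helpers (assumes a TX skb prepared by the stack; not driver code):

const struct iphdr *ip = ip_hdr(skb);		/* start of the recorded IP header */
int ip_off = skb_network_offset(skb);		/* typically ETH_HLEN, or VLAN_ETH_HLEN */
int ip_words = ip->ihl;				/* IP header length in 32-bit words */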
@@ -469,11 +469,18 @@ static struct pci_device_id s2io_tbl[] __devinitdata = {
MODULE_DEVICE_TABLE(pci, s2io_tbl);
static struct pci_error_handlers s2io_err_handler = {
.error_detected = s2io_io_error_detected,
.slot_reset = s2io_io_slot_reset,
.resume = s2io_io_resume,
};
static struct pci_driver s2io_driver = {
.name = "S2IO",
.id_table = s2io_tbl,
.probe = s2io_init_nic,
.remove = __devexit_p(s2io_rem_nic),
.err_handler = &s2io_err_handler,
};
/* A simplifier macro used both by init and free shared_mem Fns(). */
@@ -2689,6 +2696,9 @@ static void s2io_netpoll(struct net_device *dev)
u64 val64 = 0xFFFFFFFFFFFFFFFFULL;
int i;
if (pci_channel_offline(nic->pdev))
return;
disable_irq(dev->irq);
atomic_inc(&nic->isr_cnt);
@@ -3215,6 +3225,8 @@ static void alarm_intr_handler(struct s2io_nic *nic)
int i;
if (atomic_read(&nic->card_state) == CARD_DOWN)
return;
if (pci_channel_offline(nic->pdev))
return;
nic->mac_control.stats_info->sw_stat.ring_full_cnt = 0;
/* Handling the XPAK counters update */
if(nic->mac_control.stats_info->xpak_stat.xpak_timer_count < 72000) {
@@ -3958,7 +3970,6 @@ static int s2io_close(struct net_device *dev)
/* Reset card, kill tasklet and free Tx and Rx buffers. */
s2io_card_down(sp);
sp->device_close_flag = TRUE; /* Device is shut down. */
return 0;
}
@@ -4314,6 +4325,10 @@ static irqreturn_t s2io_isr(int irq, void *dev_id)
struct mac_info *mac_control;
struct config_param *config;
/* Pretend we handled any irq's from a disconnected card */
if (pci_channel_offline(sp->pdev))
return IRQ_NONE;
atomic_inc(&sp->isr_cnt);
mac_control = &sp->mac_control;
config = &sp->config;
@@ -6569,7 +6584,7 @@ static void s2io_rem_isr(struct s2io_nic * sp)
} while(cnt < 5);
}
static void s2io_card_down(struct s2io_nic * sp)
static void do_s2io_card_down(struct s2io_nic * sp, int do_io)
{
int cnt = 0;
struct XENA_dev_config __iomem *bar0 = sp->bar0;
@@ -6584,7 +6599,8 @@ static void s2io_card_down(struct s2io_nic * sp)
atomic_set(&sp->card_state, CARD_DOWN);
/* disable Tx and Rx traffic on the NIC */
stop_nic(sp);
if (do_io)
stop_nic(sp);
s2io_rem_isr(sp);
@@ -6592,7 +6608,7 @@ static void s2io_card_down(struct s2io_nic * sp)
tasklet_kill(&sp->task);
/* Check if the device is Quiescent and then Reset the NIC */
do {
while(do_io) {
/* As per the HW requirement we need to replenish the
* receive buffer to avoid the ring bump. Since there is
* no intention of processing the Rx frame at this point, we are
@@ -6617,8 +6633,9 @@ static void s2io_card_down(struct s2io_nic * sp)
(unsigned long long) val64);
break;
}
} while (1);
s2io_reset(sp);
}
if (do_io)
s2io_reset(sp);
spin_lock_irqsave(&sp->tx_lock, flags);
/* Free all Tx buffers */
@@ -6633,6 +6650,11 @@ static void s2io_card_down(struct s2io_nic * sp)
clear_bit(0, &(sp->link_state));
}
static void s2io_card_down(struct s2io_nic * sp)
{
do_s2io_card_down(sp, 1);
}
static int s2io_card_up(struct s2io_nic * sp)
{
int i, ret = 0;
@@ -8010,3 +8032,85 @@ static void lro_append_pkt(struct s2io_nic *sp, struct lro *lro,
sp->mac_control.stats_info->sw_stat.clubbed_frms_cnt++;
return;
}
/**
* s2io_io_error_detected - called when PCI error is detected
* @pdev: Pointer to PCI device
* @state: The current pci connection state
*
* This function is called after a PCI bus error affecting
* this device has been detected.
*/
static pci_ers_result_t s2io_io_error_detected(struct pci_dev *pdev,
pci_channel_state_t state)
{
struct net_device *netdev = pci_get_drvdata(pdev);
struct s2io_nic *sp = netdev->priv;
netif_device_detach(netdev);
if (netif_running(netdev)) {
/* Bring down the card, while avoiding PCI I/O */
do_s2io_card_down(sp, 0);
}
pci_disable_device(pdev);
return PCI_ERS_RESULT_NEED_RESET;
}
/**
* s2io_io_slot_reset - called after the pci bus has been reset.
* @pdev: Pointer to PCI device
*
* Restart the card from scratch, as if from a cold-boot.
* At this point, the card has experienced a hard reset,
* followed by fixups by BIOS, and has its config space
* set up identically to what it was at cold boot.
*/
static pci_ers_result_t s2io_io_slot_reset(struct pci_dev *pdev)
{
struct net_device *netdev = pci_get_drvdata(pdev);
struct s2io_nic *sp = netdev->priv;
if (pci_enable_device(pdev)) {
printk(KERN_ERR "s2io: "
"Cannot re-enable PCI device after reset.\n");
return PCI_ERS_RESULT_DISCONNECT;
}
pci_set_master(pdev);
s2io_reset(sp);
return PCI_ERS_RESULT_RECOVERED;
}
/**
* s2io_io_resume - called when traffic can start flowing again.
* @pdev: Pointer to PCI device
*
* This callback is called when the error recovery driver tells
* us that it's OK to resume normal operation.
*/
static void s2io_io_resume(struct pci_dev *pdev)
{
struct net_device *netdev = pci_get_drvdata(pdev);
struct s2io_nic *sp = netdev->priv;
if (netif_running(netdev)) {
if (s2io_card_up(sp)) {
printk(KERN_ERR "s2io: "
"Can't bring device back up after reset.\n");
return;
}
if (s2io_set_mac_addr(netdev, netdev->dev_addr) == FAILURE) {
s2io_card_down(sp);
printk(KERN_ERR "s2io: "
"Can't resetore mac addr after reset.\n");
return;
}
}
netif_device_attach(netdev);
netif_wake_queue(netdev);
}
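Together with the pci_channel_offline() guards added to s2io_netpoll(), alarm_intr_handler() and s2io_isr(), these callbacks follow the usual PCI error-recovery sequence: error_detected() quiesces the driver without touching the now-unreachable device, slot_reset() re-enables and resets it once the slot has been reset, and resume() brings traffic back up. A schematic of the order in which the recovery core invokes them:

/*
 * Schematic only -- the PCI/EEH error-recovery core drives this sequence:
 *   1. s2io_io_error_detected()  netif_device_detach(), do_s2io_card_down(sp, 0)
 *   2. (platform resets the slot)
 *   3. s2io_io_slot_reset()      pci_enable_device(), pci_set_master(), s2io_reset()
 *   4. s2io_io_resume()          s2io_card_up(), restore MAC address, re-attach netdev
 */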
@@ -794,7 +794,6 @@ struct s2io_nic {
struct net_device_stats stats;
int high_dma_flag;
int device_close_flag;
int device_enabled_once;
char name[60];
@@ -1052,6 +1051,11 @@ static void lro_append_pkt(struct s2io_nic *sp, struct lro *lro,
struct sk_buff *skb, u32 tcp_len);
static int rts_ds_steer(struct s2io_nic *nic, u8 ds_codepoint, u8 ring);
static pci_ers_result_t s2io_io_error_detected(struct pci_dev *pdev,
pci_channel_state_t state);
static pci_ers_result_t s2io_io_slot_reset(struct pci_dev *pdev);
static void s2io_io_resume(struct pci_dev *pdev);
#define s2io_tcp_mss(skb) skb_shinfo(skb)->gso_size
#define s2io_udp_mss(skb) skb_shinfo(skb)->gso_size
#define s2io_offload_type(skb) skb_shinfo(skb)->gso_type
......
/*
* sni_82596.c -- driver for intel 82596 ethernet controller, as
* used in older SNI RM machines
*/
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/errno.h>
#include <linux/ioport.h>
#include <linux/slab.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/bitops.h>
#include <linux/platform_device.h>
#include <linux/io.h>
#include <linux/irq.h>
#define SNI_82596_DRIVER_VERSION "SNI RM 82596 driver - Revision: 0.01"
static const char sni_82596_string[] = "snirm_82596";
#define DMA_ALLOC dma_alloc_coherent
#define DMA_FREE dma_free_coherent
#define DMA_WBACK(priv, addr, len) do { } while (0)
#define DMA_INV(priv, addr, len) do { } while (0)
#define DMA_WBACK_INV(priv, addr, len) do { } while (0)
#define SYSBUS 0x00004400
/* big endian CPU, 82596 little endian */
#define SWAP32(x) cpu_to_le32((u32)(x))
#define SWAP16(x) cpu_to_le16((u16)(x))
#define OPT_MPU_16BIT 0x01
#include "lib82596.c"
MODULE_AUTHOR("Thomas Bogendoerfer");
MODULE_DESCRIPTION("i82596 driver");
MODULE_LICENSE("GPL");
module_param(i596_debug, int, 0);
MODULE_PARM_DESC(i596_debug, "82596 debug mask");
static inline void ca(struct net_device *dev)
{
struct i596_private *lp = netdev_priv(dev);
writel(0, lp->ca);
}
static void mpu_port(struct net_device *dev, int c, dma_addr_t x)
{
struct i596_private *lp = netdev_priv(dev);
u32 v = (u32) (c) | (u32) (x);
if (lp->options & OPT_MPU_16BIT) {
writew(v & 0xffff, lp->mpu_port);
wmb(); /* order writes to MPU port */
udelay(1);
writew(v >> 16, lp->mpu_port);
} else {
writel(v, lp->mpu_port);
wmb(); /* order writes to MPU port */
udelay(1);
writel(v, lp->mpu_port);
}
}
static int __devinit sni_82596_probe(struct platform_device *dev)
{
struct net_device *netdevice;
struct i596_private *lp;
struct resource *res, *ca, *idprom, *options;
int retval = -ENOMEM;
void __iomem *mpu_addr;
void __iomem *ca_addr;
u8 __iomem *eth_addr;
res = platform_get_resource(dev, IORESOURCE_MEM, 0);
ca = platform_get_resource(dev, IORESOURCE_MEM, 1);
options = platform_get_resource(dev, 0, 0);
idprom = platform_get_resource(dev, IORESOURCE_MEM, 2);
if (!res || !ca || !options || !idprom)
return -ENODEV;
mpu_addr = ioremap_nocache(res->start, 4);
if (!mpu_addr)
return -ENOMEM;
ca_addr = ioremap_nocache(ca->start, 4);
if (!ca_addr)
goto probe_failed_free_mpu;
printk(KERN_INFO "Found i82596 at 0x%x\n", res->start);
netdevice = alloc_etherdev(sizeof(struct i596_private));
if (!netdevice)
goto probe_failed_free_ca;
SET_NETDEV_DEV(netdevice, &dev->dev);
platform_set_drvdata (dev, netdevice);
netdevice->base_addr = res->start;
netdevice->irq = platform_get_irq(dev, 0);
eth_addr = ioremap_nocache(idprom->start, 0x10);
if (!eth_addr)
goto probe_failed;
/* someone seems to like messed up stuff */
netdevice->dev_addr[0] = readb(eth_addr + 0x0b);
netdevice->dev_addr[1] = readb(eth_addr + 0x0a);
netdevice->dev_addr[2] = readb(eth_addr + 0x09);
netdevice->dev_addr[3] = readb(eth_addr + 0x08);
netdevice->dev_addr[4] = readb(eth_addr + 0x07);
netdevice->dev_addr[5] = readb(eth_addr + 0x06);
iounmap(eth_addr);
if (!netdevice->irq) {
printk(KERN_ERR "%s: IRQ not found for i82596 at 0x%lx\n",
__FILE__, netdevice->base_addr);
goto probe_failed;
}
lp = netdev_priv(netdevice);
lp->options = options->flags & IORESOURCE_BITS;
lp->ca = ca_addr;
lp->mpu_port = mpu_addr;
retval = i82596_probe(netdevice);
if (retval == 0)
return 0;
probe_failed:
free_netdev(netdevice);
probe_failed_free_ca:
iounmap(ca_addr);
probe_failed_free_mpu:
iounmap(mpu_addr);
return retval;
}
static int __devexit sni_82596_driver_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
struct i596_private *lp = netdev_priv(dev);
unregister_netdev(dev);
DMA_FREE(dev->dev.parent, sizeof(struct i596_private),
lp->dma, lp->dma_addr);
iounmap(lp->ca);
iounmap(lp->mpu_port);
free_netdev (dev);
return 0;
}
static struct platform_driver sni_82596_driver = {
.probe = sni_82596_probe,
.remove = __devexit_p(sni_82596_driver_remove),
.driver = {
.name = sni_82596_string,
},
};
static int __devinit sni_82596_init(void)
{
printk(KERN_INFO SNI_82596_DRIVER_VERSION "\n");
return platform_driver_register(&sni_82596_driver);
}
static void __exit sni_82596_exit(void)
{
platform_driver_unregister(&sni_82596_driver);
}
module_init(sni_82596_init);
module_exit(sni_82596_exit);
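sni_82596_probe() above expects the platform code to register a device named "snirm_82596" carrying three memory resources (MPU port, channel-attention register, ID PROM), an IRQ, and a flags-only resource whose low bits hold the OPT_MPU_16BIT option. A hedged sketch of such a device, with made-up addresses rather than real RM200/RM400 values:

/* Sketch only: resource layout inferred from the probe routine, addresses invented. */
static struct resource sni_82596_rsrc[] = {
	{ .start = 0xb8000000, .end = 0xb8000003, .flags = IORESOURCE_MEM }, /* MPU port */
	{ .start = 0xb8000010, .end = 0xb8000013, .flags = IORESOURCE_MEM }, /* CA reg */
	{ .start = 0xb8000020, .end = 0xb800002f, .flags = IORESOURCE_MEM }, /* ID PROM */
	{ .start = 22, .end = 22, .flags = IORESOURCE_IRQ },                 /* IRQ */
	{ .flags = OPT_MPU_16BIT },                                          /* options */
};

static struct platform_device sni_82596_pdev = {
	.name		= "snirm_82596",
	.id		= 0,
	.num_resources	= ARRAY_SIZE(sni_82596_rsrc),
	.resource	= sni_82596_rsrc,
};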
@@ -785,7 +785,6 @@ static void __de_set_rx_mode (struct net_device *dev)
de->tx_head = NEXT_TX(entry);
BUG_ON(TX_BUFFS_AVAIL(de) < 0);
if (TX_BUFFS_AVAIL(de) == 0)
netif_stop_queue(dev);
......
@@ -892,15 +892,6 @@
#define ALL 0 /* Clear out all the setup frame */
#define PHYS_ADDR_ONLY 1 /* Update the physical address only */
/*
** Booleans
*/
#define NO 0
#define FALSE 0
#define YES ~0
#define TRUE ~0
/*
** Adapter state
*/
......
@@ -44,3 +44,6 @@ obj-$(CONFIG_PCMCIA_WL3501) += wl3501_cs.o
obj-$(CONFIG_USB_ZD1201) += zd1201.o
obj-$(CONFIG_LIBERTAS_USB) += libertas/
rtl8187-objs := rtl8187_dev.o rtl8187_rtl8225.o
obj-$(CONFIG_RTL8187) += rtl8187.o