Commit a56cddd8 authored by David S. Miller

Merge branch 'bcmgenet'

Florian Fainelli says:

====================
Support for the Broadcom GENET driver

This patchset adds support for the Broadcom GENET Gigabit Ethernet MAC
controller. This controller is found on the Broadcom BCM7xxx Set Top Box
System-on-a-chips.

Changes since v4:
- add dependency on CONFIG_OF

Changes since v3:
- fixed Kconfig dependency on FIXED_PHY

Changes since v2:
- dropped the patch that adds an "internal" phy-mode
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents c045a734 32ec90d5
* Broadcom BCM7xxx Ethernet Controller (GENET)
Required properties:
- compatible: should contain one of "brcm,genet-v1", "brcm,genet-v2",
"brcm,genet-v3", "brcm,genet-v4".
- reg: address and length of the register set for the device
- interrupts: must be two cells, the first cell is the general purpose
interrupt line, while the second cell is the interrupt for the ring
RX and TX queues operating in ring mode
- phy-mode: String, operation mode of the PHY interface. Supported values are
"mii", "rgmii", "rgmii-txid", "rev-mii", "moca". Analogous to ePAPR
"phy-connection-type" values
- #address-cells: should be 1
- #size-cells: should be 1
Optional properties:
- clocks: When provided, must be two cells: the first is the main GENET clock
and the second is the GENET Wake-on-LAN clock.
- phy-handle: A phandle to a phy node defining the PHY address (as the reg
property, a single integer), used to describe configurations where a PHY
(internal or external) is used.
- fixed-link: When the GENET interface is connected to a MoCA hardware block or
when operating in a RGMII to RGMII type of connection, or when the MDIO bus is
voluntarily disabled, this property should be used to describe the "fixed link".
See Documentation/devicetree/bindings/net/fsl-tsec-phy.txt for information on
the property specifics
Required child nodes:
- mdio bus node: this node should always be present regardless of the PHY
configuration of the GENET instance
MDIO bus node required properties:
- compatible: should contain one of "brcm,genet-mdio-v1", "brcm,genet-mdio-v2"
"brcm,genet-mdio-v3", "brcm,genet-mdio-v4", the version has to match the
parent node compatible property (e.g: brcm,genet-v4 pairs with
brcm,genet-mdio-v4)
- reg: address and length relative to the parent node base register address
- #address-cells: address cell for MDIO bus addressing, should be 1
- #size-cells: size of the cells for MDIO bus addressing, should be 0
Ethernet PHY node properties:
See Documentation/devicetree/bindings/net/phy.txt for the list of required and
optional properties.
Internal Gigabit PHY example:
ethernet@f0b60000 {
phy-mode = "internal";
phy-handle = <&phy1>;
mac-address = [ 00 10 18 36 23 1a ];
compatible = "brcm,genet-v4";
#address-cells = <0x1>;
#size-cells = <0x1>;
reg = <0xf0b60000 0xfc4c>;
interrupts = <0x0 0x14 0x0>, <0x0 0x15 0x0>;
mdio@e14 {
compatible = "brcm,genet-mdio-v4";
#address-cells = <0x1>;
#size-cells = <0x0>;
reg = <0xe14 0x8>;
phy1: ethernet-phy@1 {
max-speed = <1000>;
reg = <0x1>;
compatible = "brcm,28nm-gphy", "ethernet-phy-ieee802.3-c22";
};
};
};
MoCA interface / MAC to MAC example:
ethernet@f0b80000 {
phy-mode = "moca";
fixed-link = <1 0 1000 0 0>;
mac-address = [ 00 10 18 36 24 1a ];
compatible = "brcm,genet-v4";
#address-cells = <0x1>;
#size-cells = <0x1>;
reg = <0xf0b80000 0xfc4c>;
interrupts = <0x0 0x16 0x0>, <0x0 0x17 0x0>;
mdio@e14 {
compatible = "brcm,genet-mdio-v4";
#address-cells = <0x1>;
#size-cells = <0x0>;
reg = <0xe14 0x8>;
};
};
External MDIO-connected Gigabit PHY/switch:
ethernet@f0ba0000 {
phy-mode = "rgmii";
phy-handle = <&phy0>;
mac-address = [ 00 10 18 36 26 1a ];
compatible = "brcm,genet-v4";
#address-cells = <0x1>;
#size-cells = <0x1>;
reg = <0xf0ba0000 0xfc4c>;
interrupts = <0x0 0x18 0x0>, <0x0 0x19 0x0>;
mdio@0e14 {
compatible = "brcm,genet-mdio-v4";
#address-cells = <0x1>;
#size-cells = <0x0>;
reg = <0xe14 0x8>;
phy0: ethernet-phy@0 {
max-speed = <1000>;
reg = <0x0>;
compatible = "brcm,bcm53125", "ethernet-phy-ieee802.3-c22";
};
};
};
...
@@ -1845,6 +1845,12 @@ L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/broadcom/b44.*
BROADCOM GENET ETHERNET DRIVER
M: Florian Fainelli <f.fainelli@gmail.com>
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/broadcom/genet/
BROADCOM BNX2 GIGABIT ETHERNET DRIVER
M: Michael Chan <mchan@broadcom.com>
L: netdev@vger.kernel.org
...
...
@@ -60,6 +60,17 @@ config BCM63XX_ENET
This driver supports the ethernet MACs in the Broadcom 63xx
MIPS chipset family (BCM63XX).
config BCMGENET
tristate "Broadcom GENET internal MAC support"
depends on OF
select MII
select PHYLIB
select FIXED_PHY if BCMGENET=y
select BCM7XXX_PHY
help
This driver supports the built-in Ethernet MACs found in the
Broadcom BCM7xxx Set Top Box family chipset.
config BNX2
tristate "Broadcom NetXtremeII support"
depends on PCI
...
...
@@ -4,6 +4,7 @@
obj-$(CONFIG_B44) += b44.o
obj-$(CONFIG_BCM63XX_ENET) += bcm63xx_enet.o
obj-$(CONFIG_BCMGENET) += genet/
obj-$(CONFIG_BNX2) += bnx2.o
obj-$(CONFIG_CNIC) += cnic.o
obj-$(CONFIG_BNX2X) += bnx2x/
...
obj-$(CONFIG_BCMGENET) += genet.o
genet-objs := bcmgenet.o bcmmii.o
/*
* Broadcom GENET (Gigabit Ethernet) controller driver
*
* Copyright (c) 2014 Broadcom Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
#define pr_fmt(fmt) "bcmgenet: " fmt
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/types.h>
#include <linux/fcntl.h>
#include <linux/interrupt.h>
#include <linux/string.h>
#include <linux/if_ether.h>
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/delay.h>
#include <linux/platform_device.h>
#include <linux/dma-mapping.h>
#include <linux/pm.h>
#include <linux/clk.h>
#include <linux/version.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_net.h>
#include <linux/of_platform.h>
#include <net/arp.h>
#include <linux/mii.h>
#include <linux/ethtool.h>
#include <linux/netdevice.h>
#include <linux/inetdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/ipv6.h>
#include <linux/phy.h>
#include <asm/unaligned.h>
#include "bcmgenet.h"
/* Maximum number of hardware queues, downsized if needed */
#define GENET_MAX_MQ_CNT 4
/* Default highest priority queue for multi queue support */
#define GENET_Q0_PRIORITY 0
#define GENET_DEFAULT_BD_CNT \
(TOTAL_DESC - priv->hw_params->tx_queues * priv->hw_params->bds_cnt)
#define RX_BUF_LENGTH 2048
#define SKB_ALIGNMENT 32
/* Tx/Rx DMA register offset, skip 256 descriptors */
#define WORDS_PER_BD(p) (p->hw_params->words_per_bd)
#define DMA_DESC_SIZE (WORDS_PER_BD(priv) * sizeof(u32))
#define GENET_TDMA_REG_OFF (priv->hw_params->tdma_offset + \
TOTAL_DESC * DMA_DESC_SIZE)
#define GENET_RDMA_REG_OFF (priv->hw_params->rdma_offset + \
TOTAL_DESC * DMA_DESC_SIZE)
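/* A worked example of the layout (a sketch, assuming TOTAL_DESC = 256 and
 * words_per_bd = 3 on GENET v4): DMA_DESC_SIZE = 3 * sizeof(u32) = 12 bytes,
 * so the TDMA/RDMA control registers start 256 * 12 = 3072 bytes past
 * tdma_offset/rdma_offset, immediately after the descriptor array.
 */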
static inline void dmadesc_set_length_status(struct bcmgenet_priv *priv,
void __iomem *d, u32 value)
{
__raw_writel(value, d + DMA_DESC_LENGTH_STATUS);
}
static inline u32 dmadesc_get_length_status(struct bcmgenet_priv *priv,
void __iomem *d)
{
return __raw_readl(d + DMA_DESC_LENGTH_STATUS);
}
static inline void dmadesc_set_addr(struct bcmgenet_priv *priv,
void __iomem *d,
dma_addr_t addr)
{
__raw_writel(lower_32_bits(addr), d + DMA_DESC_ADDRESS_LO);
/* Register writes to GISB bus can take a couple hundred nanoseconds
* and are done for each packet; save these expensive writes unless
* the platform is explicitly configured for 64-bit/LPAE.
*/
#ifdef CONFIG_PHYS_ADDR_T_64BIT
if (priv->hw_params->flags & GENET_HAS_40BITS)
__raw_writel(upper_32_bits(addr), d + DMA_DESC_ADDRESS_HI);
#endif
}
/* Combined address + length/status setter */
static inline void dmadesc_set(struct bcmgenet_priv *priv,
void __iomem *d, dma_addr_t addr, u32 val)
{
dmadesc_set_length_status(priv, d, val);
dmadesc_set_addr(priv, d, addr);
}
static inline dma_addr_t dmadesc_get_addr(struct bcmgenet_priv *priv,
void __iomem *d)
{
dma_addr_t addr;
addr = __raw_readl(d + DMA_DESC_ADDRESS_LO);
/* Register writes to GISB bus can take a couple hundred nanoseconds
* and are done for each packet; save these expensive writes unless
* the platform is explicitly configured for 64-bit/LPAE.
*/
#ifdef CONFIG_PHYS_ADDR_T_64BIT
if (priv->hw_params->flags & GENET_HAS_40BITS)
addr |= (u64)__raw_readl(d + DMA_DESC_ADDRESS_HI) << 32;
#endif
return addr;
}
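/* e.g. on a GENET_HAS_40BITS platform, a 40-bit bus address such as
 * 0x1_2345_6780 is split by the accessors above into
 * DMA_DESC_ADDRESS_LO = 0x23456780 and DMA_DESC_ADDRESS_HI = 0x1.
 */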
#define GENET_VER_FMT "%1d.%1d EPHY: 0x%04x"
#define GENET_MSG_DEFAULT (NETIF_MSG_DRV | NETIF_MSG_PROBE | \
NETIF_MSG_LINK)
static inline u32 bcmgenet_rbuf_ctrl_get(struct bcmgenet_priv *priv)
{
if (GENET_IS_V1(priv))
return bcmgenet_rbuf_readl(priv, RBUF_FLUSH_CTRL_V1);
else
return bcmgenet_sys_readl(priv, SYS_RBUF_FLUSH_CTRL);
}
static inline void bcmgenet_rbuf_ctrl_set(struct bcmgenet_priv *priv, u32 val)
{
if (GENET_IS_V1(priv))
bcmgenet_rbuf_writel(priv, val, RBUF_FLUSH_CTRL_V1);
else
bcmgenet_sys_writel(priv, val, SYS_RBUF_FLUSH_CTRL);
}
/* These macros are defined to deal with the register map change
* between GENET1.1 and GENET2. Only those currently used
* by the driver are defined.
*/
static inline u32 bcmgenet_tbuf_ctrl_get(struct bcmgenet_priv *priv)
{
if (GENET_IS_V1(priv))
return bcmgenet_rbuf_readl(priv, TBUF_CTRL_V1);
else
return __raw_readl(priv->base +
priv->hw_params->tbuf_offset + TBUF_CTRL);
}
static inline void bcmgenet_tbuf_ctrl_set(struct bcmgenet_priv *priv, u32 val)
{
if (GENET_IS_V1(priv))
bcmgenet_rbuf_writel(priv, val, TBUF_CTRL_V1);
else
__raw_writel(val, priv->base +
priv->hw_params->tbuf_offset + TBUF_CTRL);
}
static inline u32 bcmgenet_bp_mc_get(struct bcmgenet_priv *priv)
{
if (GENET_IS_V1(priv))
return bcmgenet_rbuf_readl(priv, TBUF_BP_MC_V1);
else
return __raw_readl(priv->base +
priv->hw_params->tbuf_offset + TBUF_BP_MC);
}
static inline void bcmgenet_bp_mc_set(struct bcmgenet_priv *priv, u32 val)
{
if (GENET_IS_V1(priv))
bcmgenet_rbuf_writel(priv, val, TBUF_BP_MC_V1);
else
__raw_writel(val, priv->base +
priv->hw_params->tbuf_offset + TBUF_BP_MC);
}
/* RX/TX DMA register accessors */
enum dma_reg {
DMA_RING_CFG = 0,
DMA_CTRL,
DMA_STATUS,
DMA_SCB_BURST_SIZE,
DMA_ARB_CTRL,
DMA_PRIORITY,
DMA_RING_PRIORITY,
};
static const u8 bcmgenet_dma_regs_v3plus[] = {
[DMA_RING_CFG] = 0x00,
[DMA_CTRL] = 0x04,
[DMA_STATUS] = 0x08,
[DMA_SCB_BURST_SIZE] = 0x0C,
[DMA_ARB_CTRL] = 0x2C,
[DMA_PRIORITY] = 0x30,
[DMA_RING_PRIORITY] = 0x38,
};
static const u8 bcmgenet_dma_regs_v2[] = {
[DMA_RING_CFG] = 0x00,
[DMA_CTRL] = 0x04,
[DMA_STATUS] = 0x08,
[DMA_SCB_BURST_SIZE] = 0x0C,
[DMA_ARB_CTRL] = 0x30,
[DMA_PRIORITY] = 0x34,
[DMA_RING_PRIORITY] = 0x3C,
};
static const u8 bcmgenet_dma_regs_v1[] = {
[DMA_CTRL] = 0x00,
[DMA_STATUS] = 0x04,
[DMA_SCB_BURST_SIZE] = 0x0C,
[DMA_ARB_CTRL] = 0x30,
[DMA_PRIORITY] = 0x34,
[DMA_RING_PRIORITY] = 0x3C,
};
/* Set at runtime once bcmgenet version is known */
static const u8 *bcmgenet_dma_regs;
static inline struct bcmgenet_priv *dev_to_priv(struct device *dev)
{
return netdev_priv(dev_get_drvdata(dev));
}
static inline u32 bcmgenet_tdma_readl(struct bcmgenet_priv *priv,
enum dma_reg r)
{
return __raw_readl(priv->base + GENET_TDMA_REG_OFF +
DMA_RINGS_SIZE + bcmgenet_dma_regs[r]);
}
static inline void bcmgenet_tdma_writel(struct bcmgenet_priv *priv,
u32 val, enum dma_reg r)
{
__raw_writel(val, priv->base + GENET_TDMA_REG_OFF +
DMA_RINGS_SIZE + bcmgenet_dma_regs[r]);
}
static inline u32 bcmgenet_rdma_readl(struct bcmgenet_priv *priv,
enum dma_reg r)
{
return __raw_readl(priv->base + GENET_RDMA_REG_OFF +
DMA_RINGS_SIZE + bcmgenet_dma_regs[r]);
}
static inline void bcmgenet_rdma_writel(struct bcmgenet_priv *priv,
u32 val, enum dma_reg r)
{
__raw_writel(val, priv->base + GENET_RDMA_REG_OFF +
DMA_RINGS_SIZE + bcmgenet_dma_regs[r]);
}
/* RDMA/TDMA ring registers and accessors
* we merge the common fields and just prefix with T/D the registers
* having different meaning depending on the direction
*/
enum dma_ring_reg {
TDMA_READ_PTR = 0,
RDMA_WRITE_PTR = TDMA_READ_PTR,
TDMA_READ_PTR_HI,
RDMA_WRITE_PTR_HI = TDMA_READ_PTR_HI,
TDMA_CONS_INDEX,
RDMA_PROD_INDEX = TDMA_CONS_INDEX,
TDMA_PROD_INDEX,
RDMA_CONS_INDEX = TDMA_PROD_INDEX,
DMA_RING_BUF_SIZE,
DMA_START_ADDR,
DMA_START_ADDR_HI,
DMA_END_ADDR,
DMA_END_ADDR_HI,
DMA_MBUF_DONE_THRESH,
TDMA_FLOW_PERIOD,
RDMA_XON_XOFF_THRESH = TDMA_FLOW_PERIOD,
TDMA_WRITE_PTR,
RDMA_READ_PTR = TDMA_WRITE_PTR,
TDMA_WRITE_PTR_HI,
RDMA_READ_PTR_HI = TDMA_WRITE_PTR_HI
};
/* GENET v4 supports 40-bit pointer addressing;
* for obvious reasons the LO and HI word parts
* are contiguous, but this offsets the other
* registers.
*/
static const u8 genet_dma_ring_regs_v4[] = {
[TDMA_READ_PTR] = 0x00,
[TDMA_READ_PTR_HI] = 0x04,
[TDMA_CONS_INDEX] = 0x08,
[TDMA_PROD_INDEX] = 0x0C,
[DMA_RING_BUF_SIZE] = 0x10,
[DMA_START_ADDR] = 0x14,
[DMA_START_ADDR_HI] = 0x18,
[DMA_END_ADDR] = 0x1C,
[DMA_END_ADDR_HI] = 0x20,
[DMA_MBUF_DONE_THRESH] = 0x24,
[TDMA_FLOW_PERIOD] = 0x28,
[TDMA_WRITE_PTR] = 0x2C,
[TDMA_WRITE_PTR_HI] = 0x30,
};
static const u8 genet_dma_ring_regs_v123[] = {
[TDMA_READ_PTR] = 0x00,
[TDMA_CONS_INDEX] = 0x04,
[TDMA_PROD_INDEX] = 0x08,
[DMA_RING_BUF_SIZE] = 0x0C,
[DMA_START_ADDR] = 0x10,
[DMA_END_ADDR] = 0x14,
[DMA_MBUF_DONE_THRESH] = 0x18,
[TDMA_FLOW_PERIOD] = 0x1C,
[TDMA_WRITE_PTR] = 0x20,
};
/* Set at runtime once GENET version is known */
static const u8 *genet_dma_ring_regs;
static inline u32 bcmgenet_tdma_ring_readl(struct bcmgenet_priv *priv,
unsigned int ring,
enum dma_ring_reg r)
{
return __raw_readl(priv->base + GENET_TDMA_REG_OFF +
(DMA_RING_SIZE * ring) +
genet_dma_ring_regs[r]);
}
static inline void bcmgenet_tdma_ring_writel(struct bcmgenet_priv *priv,
unsigned int ring,
u32 val,
enum dma_ring_reg r)
{
__raw_writel(val, priv->base + GENET_TDMA_REG_OFF +
(DMA_RING_SIZE * ring) +
genet_dma_ring_regs[r]);
}
static inline u32 bcmgenet_rdma_ring_readl(struct bcmgenet_priv *priv,
unsigned int ring,
enum dma_ring_reg r)
{
return __raw_readl(priv->base + GENET_RDMA_REG_OFF +
(DMA_RING_SIZE * ring) +
genet_dma_ring_regs[r]);
}
static inline void bcmgenet_rdma_ring_writel(struct bcmgenet_priv *priv,
unsigned int ring,
u32 val,
enum dma_ring_reg r)
{
__raw_writel(val, priv->base + GENET_RDMA_REG_OFF +
(DMA_RING_SIZE * ring) +
genet_dma_ring_regs[r]);
}
static int bcmgenet_get_settings(struct net_device *dev,
struct ethtool_cmd *cmd)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
if (!netif_running(dev))
return -EINVAL;
if (!priv->phydev)
return -ENODEV;
return phy_ethtool_gset(priv->phydev, cmd);
}
static int bcmgenet_set_settings(struct net_device *dev,
struct ethtool_cmd *cmd)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
if (!netif_running(dev))
return -EINVAL;
if (!priv->phydev)
return -ENODEV;
return phy_ethtool_sset(priv->phydev, cmd);
}
static int bcmgenet_set_rx_csum(struct net_device *dev,
netdev_features_t wanted)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
u32 rbuf_chk_ctrl;
bool rx_csum_en;
rx_csum_en = !!(wanted & NETIF_F_RXCSUM);
rbuf_chk_ctrl = bcmgenet_rbuf_readl(priv, RBUF_CHK_CTRL);
/* enable rx checksumming */
if (rx_csum_en)
rbuf_chk_ctrl |= RBUF_RXCHK_EN;
else
rbuf_chk_ctrl &= ~RBUF_RXCHK_EN;
priv->desc_rxchk_en = rx_csum_en;
bcmgenet_rbuf_writel(priv, rbuf_chk_ctrl, RBUF_CHK_CTRL);
return 0;
}
static int bcmgenet_set_tx_csum(struct net_device *dev,
netdev_features_t wanted)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
bool desc_64b_en;
u32 tbuf_ctrl, rbuf_ctrl;
tbuf_ctrl = bcmgenet_tbuf_ctrl_get(priv);
rbuf_ctrl = bcmgenet_rbuf_readl(priv, RBUF_CTRL);
desc_64b_en = !!(wanted & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM));
/* enable 64 bytes descriptor in both directions (RBUF and TBUF) */
if (desc_64b_en) {
tbuf_ctrl |= RBUF_64B_EN;
rbuf_ctrl |= RBUF_64B_EN;
} else {
tbuf_ctrl &= ~RBUF_64B_EN;
rbuf_ctrl &= ~RBUF_64B_EN;
}
priv->desc_64b_en = desc_64b_en;
bcmgenet_tbuf_ctrl_set(priv, tbuf_ctrl);
bcmgenet_rbuf_writel(priv, rbuf_ctrl, RBUF_CTRL);
return 0;
}
static int bcmgenet_set_features(struct net_device *dev,
netdev_features_t features)
{
netdev_features_t changed = features ^ dev->features;
netdev_features_t wanted = dev->wanted_features;
int ret = 0;
if (changed & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM))
ret = bcmgenet_set_tx_csum(dev, wanted);
if (changed & (NETIF_F_RXCSUM))
ret = bcmgenet_set_rx_csum(dev, wanted);
return ret;
}
static u32 bcmgenet_get_msglevel(struct net_device *dev)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
return priv->msg_enable;
}
static void bcmgenet_set_msglevel(struct net_device *dev, u32 level)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
priv->msg_enable = level;
}
/* standard ethtool support functions. */
enum bcmgenet_stat_type {
BCMGENET_STAT_NETDEV = -1,
BCMGENET_STAT_MIB_RX,
BCMGENET_STAT_MIB_TX,
BCMGENET_STAT_RUNT,
BCMGENET_STAT_MISC,
};
struct bcmgenet_stats {
char stat_string[ETH_GSTRING_LEN];
int stat_sizeof;
int stat_offset;
enum bcmgenet_stat_type type;
/* reg offset from UMAC base for misc counters */
u16 reg_offset;
};
#define STAT_NETDEV(m) { \
.stat_string = __stringify(m), \
.stat_sizeof = sizeof(((struct net_device_stats *)0)->m), \
.stat_offset = offsetof(struct net_device_stats, m), \
.type = BCMGENET_STAT_NETDEV, \
}
#define STAT_GENET_MIB(str, m, _type) { \
.stat_string = str, \
.stat_sizeof = sizeof(((struct bcmgenet_priv *)0)->m), \
.stat_offset = offsetof(struct bcmgenet_priv, m), \
.type = _type, \
}
#define STAT_GENET_MIB_RX(str, m) STAT_GENET_MIB(str, m, BCMGENET_STAT_MIB_RX)
#define STAT_GENET_MIB_TX(str, m) STAT_GENET_MIB(str, m, BCMGENET_STAT_MIB_TX)
#define STAT_GENET_RUNT(str, m) STAT_GENET_MIB(str, m, BCMGENET_STAT_RUNT)
#define STAT_GENET_MISC(str, m, offset) { \
.stat_string = str, \
.stat_sizeof = sizeof(((struct bcmgenet_priv *)0)->m), \
.stat_offset = offsetof(struct bcmgenet_priv, m), \
.type = BCMGENET_STAT_MISC, \
.reg_offset = offset, \
}
/* There is a 0xC gap between the end of RX and beginning of TX stats and then
* between the end of TX stats and the beginning of the RX RUNT
*/
#define BCMGENET_STAT_OFFSET 0xc
/* Hardware counters must be kept in sync because the order/offset
* is important here (order in structure declaration = order in hardware)
*/
static const struct bcmgenet_stats bcmgenet_gstrings_stats[] = {
/* general stats */
STAT_NETDEV(rx_packets),
STAT_NETDEV(tx_packets),
STAT_NETDEV(rx_bytes),
STAT_NETDEV(tx_bytes),
STAT_NETDEV(rx_errors),
STAT_NETDEV(tx_errors),
STAT_NETDEV(rx_dropped),
STAT_NETDEV(tx_dropped),
STAT_NETDEV(multicast),
/* UniMAC RSV counters */
STAT_GENET_MIB_RX("rx_64_octets", mib.rx.pkt_cnt.cnt_64),
STAT_GENET_MIB_RX("rx_65_127_oct", mib.rx.pkt_cnt.cnt_127),
STAT_GENET_MIB_RX("rx_128_255_oct", mib.rx.pkt_cnt.cnt_255),
STAT_GENET_MIB_RX("rx_256_511_oct", mib.rx.pkt_cnt.cnt_511),
STAT_GENET_MIB_RX("rx_512_1023_oct", mib.rx.pkt_cnt.cnt_1023),
STAT_GENET_MIB_RX("rx_1024_1518_oct", mib.rx.pkt_cnt.cnt_1518),
STAT_GENET_MIB_RX("rx_vlan_1519_1522_oct", mib.rx.pkt_cnt.cnt_mgv),
STAT_GENET_MIB_RX("rx_1522_2047_oct", mib.rx.pkt_cnt.cnt_2047),
STAT_GENET_MIB_RX("rx_2048_4095_oct", mib.rx.pkt_cnt.cnt_4095),
STAT_GENET_MIB_RX("rx_4096_9216_oct", mib.rx.pkt_cnt.cnt_9216),
STAT_GENET_MIB_RX("rx_pkts", mib.rx.pkt),
STAT_GENET_MIB_RX("rx_bytes", mib.rx.bytes),
STAT_GENET_MIB_RX("rx_multicast", mib.rx.mca),
STAT_GENET_MIB_RX("rx_broadcast", mib.rx.bca),
STAT_GENET_MIB_RX("rx_fcs", mib.rx.fcs),
STAT_GENET_MIB_RX("rx_control", mib.rx.cf),
STAT_GENET_MIB_RX("rx_pause", mib.rx.pf),
STAT_GENET_MIB_RX("rx_unknown", mib.rx.uo),
STAT_GENET_MIB_RX("rx_align", mib.rx.aln),
STAT_GENET_MIB_RX("rx_outrange", mib.rx.flr),
STAT_GENET_MIB_RX("rx_code", mib.rx.cde),
STAT_GENET_MIB_RX("rx_carrier", mib.rx.fcr),
STAT_GENET_MIB_RX("rx_oversize", mib.rx.ovr),
STAT_GENET_MIB_RX("rx_jabber", mib.rx.jbr),
STAT_GENET_MIB_RX("rx_mtu_err", mib.rx.mtue),
STAT_GENET_MIB_RX("rx_good_pkts", mib.rx.pok),
STAT_GENET_MIB_RX("rx_unicast", mib.rx.uc),
STAT_GENET_MIB_RX("rx_ppp", mib.rx.ppp),
STAT_GENET_MIB_RX("rx_crc", mib.rx.rcrc),
/* UniMAC TSV counters */
STAT_GENET_MIB_TX("tx_64_octets", mib.tx.pkt_cnt.cnt_64),
STAT_GENET_MIB_TX("tx_65_127_oct", mib.tx.pkt_cnt.cnt_127),
STAT_GENET_MIB_TX("tx_128_255_oct", mib.tx.pkt_cnt.cnt_255),
STAT_GENET_MIB_TX("tx_256_511_oct", mib.tx.pkt_cnt.cnt_511),
STAT_GENET_MIB_TX("tx_512_1023_oct", mib.tx.pkt_cnt.cnt_1023),
STAT_GENET_MIB_TX("tx_1024_1518_oct", mib.tx.pkt_cnt.cnt_1518),
STAT_GENET_MIB_TX("tx_vlan_1519_1522_oct", mib.tx.pkt_cnt.cnt_mgv),
STAT_GENET_MIB_TX("tx_1522_2047_oct", mib.tx.pkt_cnt.cnt_2047),
STAT_GENET_MIB_TX("tx_2048_4095_oct", mib.tx.pkt_cnt.cnt_4095),
STAT_GENET_MIB_TX("tx_4096_9216_oct", mib.tx.pkt_cnt.cnt_9216),
STAT_GENET_MIB_TX("tx_pkts", mib.tx.pkts),
STAT_GENET_MIB_TX("tx_multicast", mib.tx.mca),
STAT_GENET_MIB_TX("tx_broadcast", mib.tx.bca),
STAT_GENET_MIB_TX("tx_pause", mib.tx.pf),
STAT_GENET_MIB_TX("tx_control", mib.tx.cf),
STAT_GENET_MIB_TX("tx_fcs_err", mib.tx.fcs),
STAT_GENET_MIB_TX("tx_oversize", mib.tx.ovr),
STAT_GENET_MIB_TX("tx_defer", mib.tx.drf),
STAT_GENET_MIB_TX("tx_excess_defer", mib.tx.edf),
STAT_GENET_MIB_TX("tx_single_col", mib.tx.scl),
STAT_GENET_MIB_TX("tx_multi_col", mib.tx.mcl),
STAT_GENET_MIB_TX("tx_late_col", mib.tx.lcl),
STAT_GENET_MIB_TX("tx_excess_col", mib.tx.ecl),
STAT_GENET_MIB_TX("tx_frags", mib.tx.frg),
STAT_GENET_MIB_TX("tx_total_col", mib.tx.ncl),
STAT_GENET_MIB_TX("tx_jabber", mib.tx.jbr),
STAT_GENET_MIB_TX("tx_bytes", mib.tx.bytes),
STAT_GENET_MIB_TX("tx_good_pkts", mib.tx.pok),
STAT_GENET_MIB_TX("tx_unicast", mib.tx.uc),
/* UniMAC RUNT counters */
STAT_GENET_RUNT("rx_runt_pkts", mib.rx_runt_cnt),
STAT_GENET_RUNT("rx_runt_valid_fcs", mib.rx_runt_fcs),
STAT_GENET_RUNT("rx_runt_inval_fcs_align", mib.rx_runt_fcs_align),
STAT_GENET_RUNT("rx_runt_bytes", mib.rx_runt_bytes),
/* Misc UniMAC counters */
STAT_GENET_MISC("rbuf_ovflow_cnt", mib.rbuf_ovflow_cnt,
UMAC_RBUF_OVFL_CNT),
STAT_GENET_MISC("rbuf_err_cnt", mib.rbuf_err_cnt, UMAC_RBUF_ERR_CNT),
STAT_GENET_MISC("mdf_err_cnt", mib.mdf_err_cnt, UMAC_MDF_ERR_CNT),
};
#define BCMGENET_STATS_LEN ARRAY_SIZE(bcmgenet_gstrings_stats)
static void bcmgenet_get_drvinfo(struct net_device *dev,
struct ethtool_drvinfo *info)
{
strlcpy(info->driver, "bcmgenet", sizeof(info->driver));
strlcpy(info->version, "v2.0", sizeof(info->version));
info->n_stats = BCMGENET_STATS_LEN;
}
static int bcmgenet_get_sset_count(struct net_device *dev, int string_set)
{
switch (string_set) {
case ETH_SS_STATS:
return BCMGENET_STATS_LEN;
default:
return -EOPNOTSUPP;
}
}
static void bcmgenet_get_strings(struct net_device *dev,
u32 stringset, u8 *data)
{
int i;
switch (stringset) {
case ETH_SS_STATS:
for (i = 0; i < BCMGENET_STATS_LEN; i++) {
memcpy(data + i * ETH_GSTRING_LEN,
bcmgenet_gstrings_stats[i].stat_string,
ETH_GSTRING_LEN);
}
break;
}
}
static void bcmgenet_update_mib_counters(struct bcmgenet_priv *priv)
{
int i, j = 0;
for (i = 0; i < BCMGENET_STATS_LEN; i++) {
const struct bcmgenet_stats *s;
u8 offset = 0;
u32 val = 0;
char *p;
s = &bcmgenet_gstrings_stats[i];
switch (s->type) {
case BCMGENET_STAT_NETDEV:
continue;
case BCMGENET_STAT_MIB_RX:
case BCMGENET_STAT_MIB_TX:
case BCMGENET_STAT_RUNT:
if (s->type != BCMGENET_STAT_MIB_RX)
offset = BCMGENET_STAT_OFFSET;
val = bcmgenet_umac_readl(priv, UMAC_MIB_START +
j + offset);
break;
case BCMGENET_STAT_MISC:
val = bcmgenet_umac_readl(priv, s->reg_offset);
/* clear if overflowed */
if (val == ~0)
bcmgenet_umac_writel(priv, 0, s->reg_offset);
break;
}
j += s->stat_sizeof;
p = (char *)priv + s->stat_offset;
*(u32 *)p = val;
}
}
static void bcmgenet_get_ethtool_stats(struct net_device *dev,
struct ethtool_stats *stats,
u64 *data)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
int i;
if (netif_running(dev))
bcmgenet_update_mib_counters(priv);
for (i = 0; i < BCMGENET_STATS_LEN; i++) {
const struct bcmgenet_stats *s;
char *p;
s = &bcmgenet_gstrings_stats[i];
if (s->type == BCMGENET_STAT_NETDEV)
p = (char *)&dev->stats;
else
p = (char *)priv;
p += s->stat_offset;
data[i] = *(u32 *)p;
}
}
/* standard ethtool support functions. */
static struct ethtool_ops bcmgenet_ethtool_ops = {
.get_strings = bcmgenet_get_strings,
.get_sset_count = bcmgenet_get_sset_count,
.get_ethtool_stats = bcmgenet_get_ethtool_stats,
.get_settings = bcmgenet_get_settings,
.set_settings = bcmgenet_set_settings,
.get_drvinfo = bcmgenet_get_drvinfo,
.get_link = ethtool_op_get_link,
.get_msglevel = bcmgenet_get_msglevel,
.set_msglevel = bcmgenet_set_msglevel,
};
/* Power down the unimac, based on mode. */
static void bcmgenet_power_down(struct bcmgenet_priv *priv,
enum bcmgenet_power_mode mode)
{
u32 reg;
switch (mode) {
case GENET_POWER_CABLE_SENSE:
if (priv->phydev)
phy_detach(priv->phydev);
break;
case GENET_POWER_PASSIVE:
/* Power down LED */
bcmgenet_mii_reset(priv->dev);
if (priv->hw_params->flags & GENET_HAS_EXT) {
reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
reg |= (EXT_PWR_DOWN_PHY |
EXT_PWR_DOWN_DLL | EXT_PWR_DOWN_BIAS);
bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
}
break;
default:
break;
}
}
static void bcmgenet_power_up(struct bcmgenet_priv *priv,
enum bcmgenet_power_mode mode)
{
u32 reg;
if (!(priv->hw_params->flags & GENET_HAS_EXT))
return;
reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
switch (mode) {
case GENET_POWER_PASSIVE:
reg &= ~(EXT_PWR_DOWN_DLL | EXT_PWR_DOWN_PHY |
EXT_PWR_DOWN_BIAS);
/* fallthrough */
case GENET_POWER_CABLE_SENSE:
/* enable APD */
reg |= EXT_PWR_DN_EN_LD;
break;
default:
break;
}
bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
bcmgenet_mii_reset(priv->dev);
}
/* ioctl: handle special commands that are not present in ethtool */
static int bcmgenet_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
int val = 0;
if (!netif_running(dev))
return -EINVAL;
switch (cmd) {
case SIOCGMIIPHY:
case SIOCGMIIREG:
case SIOCSMIIREG:
if (!priv->phydev)
val = -ENODEV;
else
val = phy_mii_ioctl(priv->phydev, rq, cmd);
break;
default:
val = -EINVAL;
break;
}
return val;
}
static struct enet_cb *bcmgenet_get_txcb(struct bcmgenet_priv *priv,
struct bcmgenet_tx_ring *ring)
{
struct enet_cb *tx_cb_ptr;
tx_cb_ptr = ring->cbs;
tx_cb_ptr += ring->write_ptr - ring->cb_ptr;
tx_cb_ptr->bd_addr = priv->tx_bds + ring->write_ptr * DMA_DESC_SIZE;
/* Advancing local write pointer */
if (ring->write_ptr == ring->end_ptr)
ring->write_ptr = ring->cb_ptr;
else
ring->write_ptr++;
return tx_cb_ptr;
}
/* Simple helper to free a control block's resources */
static void bcmgenet_free_cb(struct enet_cb *cb)
{
dev_kfree_skb_any(cb->skb);
cb->skb = NULL;
dma_unmap_addr_set(cb, dma_addr, 0);
}
static inline void bcmgenet_tx_ring16_int_disable(struct bcmgenet_priv *priv,
struct bcmgenet_tx_ring *ring)
{
bcmgenet_intrl2_0_writel(priv,
UMAC_IRQ_TXDMA_BDONE | UMAC_IRQ_TXDMA_PDONE,
INTRL2_CPU_MASK_SET);
}
static inline void bcmgenet_tx_ring16_int_enable(struct bcmgenet_priv *priv,
struct bcmgenet_tx_ring *ring)
{
bcmgenet_intrl2_0_writel(priv,
UMAC_IRQ_TXDMA_BDONE | UMAC_IRQ_TXDMA_PDONE,
INTRL2_CPU_MASK_CLEAR);
}
static inline void bcmgenet_tx_ring_int_enable(struct bcmgenet_priv *priv,
struct bcmgenet_tx_ring *ring)
{
bcmgenet_intrl2_1_writel(priv,
(1 << ring->index), INTRL2_CPU_MASK_CLEAR);
priv->int1_mask &= ~(1 << ring->index);
}
static inline void bcmgenet_tx_ring_int_disable(struct bcmgenet_priv *priv,
struct bcmgenet_tx_ring *ring)
{
bcmgenet_intrl2_1_writel(priv,
(1 << ring->index), INTRL2_CPU_MASK_SET);
priv->int1_mask |= (1 << ring->index);
}
/* Unlocked version of the reclaim routine */
static void __bcmgenet_tx_reclaim(struct net_device *dev,
struct bcmgenet_tx_ring *ring)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
int last_tx_cn, last_c_index, num_tx_bds;
struct enet_cb *tx_cb_ptr;
unsigned int c_index;
/* Compute how many buffers have been transmitted since the last xmit call */
c_index = bcmgenet_tdma_ring_readl(priv, ring->index, TDMA_CONS_INDEX);
last_c_index = ring->c_index;
num_tx_bds = ring->size;
c_index &= (num_tx_bds - 1);
if (c_index >= last_c_index)
last_tx_cn = c_index - last_c_index;
else
last_tx_cn = num_tx_bds - last_c_index + c_index;
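/* Worked example of the wrap-around branch: with num_tx_bds = 256,
 * last_c_index = 250 and c_index = 4, the hardware has consumed
 * 256 - 250 + 4 = 10 descriptors since the last reclaim.
 */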
netif_dbg(priv, tx_done, dev,
"%s ring=%d index=%d last_tx_cn=%d last_index=%d\n",
__func__, ring->index,
c_index, last_tx_cn, last_c_index);
/* Reclaim transmitted buffers */
while (last_tx_cn-- > 0) {
tx_cb_ptr = ring->cbs + last_c_index;
if (tx_cb_ptr->skb) {
dev->stats.tx_bytes += tx_cb_ptr->skb->len;
dma_unmap_single(&dev->dev,
dma_unmap_addr(tx_cb_ptr, dma_addr),
tx_cb_ptr->skb->len,
DMA_TO_DEVICE);
bcmgenet_free_cb(tx_cb_ptr);
} else if (dma_unmap_addr(tx_cb_ptr, dma_addr)) {
dev->stats.tx_bytes +=
dma_unmap_len(tx_cb_ptr, dma_len);
dma_unmap_page(&dev->dev,
dma_unmap_addr(tx_cb_ptr, dma_addr),
dma_unmap_len(tx_cb_ptr, dma_len),
DMA_TO_DEVICE);
dma_unmap_addr_set(tx_cb_ptr, dma_addr, 0);
}
dev->stats.tx_packets++;
ring->free_bds += 1;
last_c_index++;
last_c_index &= (num_tx_bds - 1);
}
if (ring->free_bds > (MAX_SKB_FRAGS + 1))
ring->int_disable(priv, ring);
if (__netif_subqueue_stopped(dev, ring->queue))
netif_wake_subqueue(dev, ring->queue);
ring->c_index = c_index;
}
static void bcmgenet_tx_reclaim(struct net_device *dev,
struct bcmgenet_tx_ring *ring)
{
unsigned long flags;
spin_lock_irqsave(&ring->lock, flags);
__bcmgenet_tx_reclaim(dev, ring);
spin_unlock_irqrestore(&ring->lock, flags);
}
static void bcmgenet_tx_reclaim_all(struct net_device *dev)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
int i;
if (netif_is_multiqueue(dev)) {
for (i = 0; i < priv->hw_params->tx_queues; i++)
bcmgenet_tx_reclaim(dev, &priv->tx_rings[i]);
}
bcmgenet_tx_reclaim(dev, &priv->tx_rings[DESC_INDEX]);
}
/* Transmit a single SKB (either the head of a fragmented packet or a
* standalone SKB); the caller must hold priv->lock
*/
static int bcmgenet_xmit_single(struct net_device *dev,
struct sk_buff *skb,
u16 dma_desc_flags,
struct bcmgenet_tx_ring *ring)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
struct device *kdev = &priv->pdev->dev;
struct enet_cb *tx_cb_ptr;
unsigned int skb_len;
dma_addr_t mapping;
u32 length_status;
int ret;
tx_cb_ptr = bcmgenet_get_txcb(priv, ring);
if (unlikely(!tx_cb_ptr))
BUG();
tx_cb_ptr->skb = skb;
skb_len = skb_headlen(skb) < ETH_ZLEN ? ETH_ZLEN : skb_headlen(skb);
mapping = dma_map_single(kdev, skb->data, skb_len, DMA_TO_DEVICE);
ret = dma_mapping_error(kdev, mapping);
if (ret) {
netif_err(priv, tx_err, dev, "Tx DMA map failed\n");
dev_kfree_skb(skb);
return ret;
}
dma_unmap_addr_set(tx_cb_ptr, dma_addr, mapping);
dma_unmap_len_set(tx_cb_ptr, dma_len, skb->len);
length_status = (skb_len << DMA_BUFLENGTH_SHIFT) | dma_desc_flags |
(priv->hw_params->qtag_mask << DMA_TX_QTAG_SHIFT) |
DMA_TX_APPEND_CRC;
if (skb->ip_summed == CHECKSUM_PARTIAL)
length_status |= DMA_TX_DO_CSUM;
dmadesc_set(priv, tx_cb_ptr->bd_addr, mapping, length_status);
/* Decrement total BD count and advance our write pointer */
ring->free_bds -= 1;
ring->prod_index += 1;
ring->prod_index &= DMA_P_INDEX_MASK;
return 0;
}
/* Transmit an SKB fragment */
static int bcmgenet_xmit_frag(struct net_device *dev,
skb_frag_t *frag,
u16 dma_desc_flags,
struct bcmgenet_tx_ring *ring)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
struct device *kdev = &priv->pdev->dev;
struct enet_cb *tx_cb_ptr;
dma_addr_t mapping;
int ret;
tx_cb_ptr = bcmgenet_get_txcb(priv, ring);
if (unlikely(!tx_cb_ptr))
BUG();
tx_cb_ptr->skb = NULL;
mapping = skb_frag_dma_map(kdev, frag, 0,
skb_frag_size(frag), DMA_TO_DEVICE);
ret = dma_mapping_error(kdev, mapping);
if (ret) {
netif_err(priv, tx_err, dev, "%s: Tx DMA map failed\n",
__func__);
return ret;
}
dma_unmap_addr_set(tx_cb_ptr, dma_addr, mapping);
dma_unmap_len_set(tx_cb_ptr, dma_len, frag->size);
dmadesc_set(priv, tx_cb_ptr->bd_addr, mapping,
(frag->size << DMA_BUFLENGTH_SHIFT) | dma_desc_flags |
(priv->hw_params->qtag_mask << DMA_TX_QTAG_SHIFT));
ring->free_bds -= 1;
ring->prod_index += 1;
ring->prod_index &= DMA_P_INDEX_MASK;
return 0;
}
/* Reallocate the SKB to put enough headroom in front of it and insert
* the transmit checksum offsets in the descriptors
*/
static int bcmgenet_put_tx_csum(struct net_device *dev, struct sk_buff *skb)
{
struct status_64 *status = NULL;
struct sk_buff *new_skb;
u16 offset;
u8 ip_proto;
u16 ip_ver;
u32 tx_csum_info;
if (unlikely(skb_headroom(skb) < sizeof(*status))) {
/* If the 64-byte status block is enabled, we must make sure the skb
* has enough headroom for us to insert it.
*/
new_skb = skb_realloc_headroom(skb, sizeof(*status));
dev_kfree_skb(skb);
if (!new_skb) {
dev->stats.tx_errors++;
dev->stats.tx_dropped++;
return -ENOMEM;
}
skb = new_skb;
}
skb_push(skb, sizeof(*status));
status = (struct status_64 *)skb->data;
if (skb->ip_summed == CHECKSUM_PARTIAL) {
ip_ver = ntohs(skb->protocol);
switch (ip_ver) {
case ETH_P_IP:
ip_proto = ip_hdr(skb)->protocol;
break;
case ETH_P_IPV6:
ip_proto = ipv6_hdr(skb)->nexthdr;
break;
default:
return 0;
}
offset = skb_checksum_start_offset(skb) - sizeof(*status);
tx_csum_info = (offset << STATUS_TX_CSUM_START_SHIFT) |
(offset + skb->csum_offset);
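/* Worked example for a hypothetical IPv4/TCP frame: the checksum
 * start sits 14 (Ethernet) + 20 (IP) = 34 bytes into the frame and
 * csum_offset is 16, so tx_csum_info encodes start 34 and write
 * position 34 + 16 = 50, both relative to the end of the 64B
 * status block.
 */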
/* Set the length-valid bit for TCP and UDP, plus the special UDP
* flag for IPv4; for any other protocol, clear tx_csum_info.
*/
if (ip_proto == IPPROTO_TCP || ip_proto == IPPROTO_UDP) {
tx_csum_info |= STATUS_TX_CSUM_LV;
if (ip_proto == IPPROTO_UDP && ip_ver == ETH_P_IP)
tx_csum_info |= STATUS_TX_CSUM_PROTO_UDP;
} else
tx_csum_info = 0;
status->tx_csum_info = tx_csum_info;
}
return 0;
}
static netdev_tx_t bcmgenet_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
struct bcmgenet_tx_ring *ring = NULL;
unsigned long flags = 0;
int nr_frags, index;
u16 dma_desc_flags;
int ret;
int i;
index = skb_get_queue_mapping(skb);
/* Mapping strategy:
* queue_mapping = 0, unclassified, packet transmitted through ring 16
* queue_mapping = 1, goes to ring 0 (highest priority queue)
* queue_mapping = 2, goes to ring 1.
* queue_mapping = 3, goes to ring 2.
* queue_mapping = 4, goes to ring 3.
*/
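/* e.g. an skb with queue_mapping == 2 ends up on priv->tx_rings[1],
 * while queue_mapping == 0 falls back to the default descriptor ring
 * (ring 16, DESC_INDEX).
 */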
if (index == 0)
index = DESC_INDEX;
else
index -= 1;
if ((index != DESC_INDEX) && (index > priv->hw_params->tx_queues - 1)) {
netdev_err(dev, "%s: queue_mapping %d is invalid\n",
__func__, skb_get_queue_mapping(skb));
dev->stats.tx_errors++;
dev->stats.tx_dropped++;
ret = NETDEV_TX_OK;
goto out;
}
nr_frags = skb_shinfo(skb)->nr_frags;
ring = &priv->tx_rings[index];
spin_lock_irqsave(&ring->lock, flags);
if (ring->free_bds <= nr_frags + 1) {
netif_stop_subqueue(dev, ring->queue);
netdev_err(dev, "%s: tx ring %d full when queue %d awake\n",
__func__, index, ring->queue);
ret = NETDEV_TX_BUSY;
goto out;
}
/* reclaim transmitted skbs every 8 packets. */
/*if (ring->free_bds < ring->size - 8)*/
/*__bcmgenet_tx_reclaim(dev, ring);*/
/* set the SKB transmit checksum */
if (priv->desc_64b_en) {
ret = bcmgenet_put_tx_csum(dev, skb);
if (ret) {
ret = NETDEV_TX_OK;
goto out;
}
}
dma_desc_flags = DMA_SOP;
if (nr_frags == 0)
dma_desc_flags |= DMA_EOP;
/* Transmit single SKB or head of fragment list */
ret = bcmgenet_xmit_single(dev, skb, dma_desc_flags, ring);
if (ret) {
ret = NETDEV_TX_OK;
goto out;
}
/* xmit fragment */
for (i = 0; i < nr_frags; i++) {
ret = bcmgenet_xmit_frag(dev,
&skb_shinfo(skb)->frags[i],
(i == nr_frags - 1) ? DMA_EOP : 0, ring);
if (ret) {
ret = NETDEV_TX_OK;
goto out;
}
}
/* we kept a software copy of how much we should advance the TDMA
* producer index, now write it down to the hardware
*/
bcmgenet_tdma_ring_writel(priv, ring->index,
ring->prod_index, TDMA_PROD_INDEX);
if (ring->free_bds <= (MAX_SKB_FRAGS + 1)) {
netif_stop_subqueue(dev, ring->queue);
ring->int_enable(priv, ring);
}
out:
spin_unlock_irqrestore(&ring->lock, flags);
return ret;
}
static int bcmgenet_rx_refill(struct bcmgenet_priv *priv,
struct enet_cb *cb)
{
struct device *kdev = &priv->pdev->dev;
struct sk_buff *skb;
dma_addr_t mapping;
int ret;
skb = netdev_alloc_skb(priv->dev,
priv->rx_buf_len + SKB_ALIGNMENT);
if (!skb)
return -ENOMEM;
/* a caller did not release this control block */
WARN_ON(cb->skb != NULL);
cb->skb = skb;
mapping = dma_map_single(kdev, skb->data,
priv->rx_buf_len, DMA_FROM_DEVICE);
ret = dma_mapping_error(kdev, mapping);
if (ret) {
bcmgenet_free_cb(cb);
netif_err(priv, rx_err, priv->dev,
"%s DMA map failed\n", __func__);
return ret;
}
dma_unmap_addr_set(cb, dma_addr, mapping);
/* assign packet, prepare descriptor, and advance pointer */
dmadesc_set_addr(priv, priv->rx_bd_assign_ptr, mapping);
/* turn on the newly assigned BD for DMA to use */
priv->rx_bd_assign_index++;
priv->rx_bd_assign_index &= (priv->num_rx_bds - 1);
priv->rx_bd_assign_ptr = priv->rx_bds +
(priv->rx_bd_assign_index * DMA_DESC_SIZE);
return 0;
}
/* bcmgenet_desc_rx - descriptor based rx process.
* This can be called from the bottom half, or from the NAPI polling method.
*/
static unsigned int bcmgenet_desc_rx(struct bcmgenet_priv *priv,
unsigned int budget)
{
struct net_device *dev = priv->dev;
struct enet_cb *cb;
struct sk_buff *skb;
u32 dma_length_status;
unsigned long dma_flag;
int len, err;
unsigned int rxpktprocessed = 0, rxpkttoprocess;
unsigned int p_index;
unsigned int chksum_ok = 0;
p_index = bcmgenet_rdma_ring_readl(priv,
DESC_INDEX, RDMA_PROD_INDEX);
p_index &= DMA_P_INDEX_MASK;
if (p_index < priv->rx_c_index)
rxpkttoprocess = (DMA_C_INDEX_MASK + 1) -
priv->rx_c_index + p_index;
else
rxpkttoprocess = p_index - priv->rx_c_index;
netif_dbg(priv, rx_status, dev,
"RDMA: rxpkttoprocess=%d\n", rxpkttoprocess);
while ((rxpktprocessed < rxpkttoprocess) &&
(rxpktprocessed < budget)) {
/* Unmap the packet contents so that we can use the
* RSV from the 64-byte descriptor when enabled and save
* a 32-bit register read
*/
cb = &priv->rx_cbs[priv->rx_read_ptr];
skb = cb->skb;
dma_unmap_single(&dev->dev, dma_unmap_addr(cb, dma_addr),
priv->rx_buf_len, DMA_FROM_DEVICE);
if (!priv->desc_64b_en) {
dma_length_status = dmadesc_get_length_status(priv,
priv->rx_bds +
(priv->rx_read_ptr *
DMA_DESC_SIZE));
} else {
struct status_64 *status;
status = (struct status_64 *)skb->data;
dma_length_status = status->length_status;
}
/* DMA flags and length are still valid no matter how
* we got the Receive Status Vector (64B RSB or register)
*/
dma_flag = dma_length_status & 0xffff;
len = dma_length_status >> DMA_BUFLENGTH_SHIFT;
netif_dbg(priv, rx_status, dev,
"%s: p_ind=%d c_ind=%d read_ptr=%d len_stat=0x%08x\n",
__func__, p_index, priv->rx_c_index, priv->rx_read_ptr,
dma_length_status);
rxpktprocessed++;
priv->rx_read_ptr++;
priv->rx_read_ptr &= (priv->num_rx_bds - 1);
/* out of memory, just drop packets at the hardware level */
if (unlikely(!skb)) {
dev->stats.rx_dropped++;
dev->stats.rx_errors++;
goto refill;
}
if (unlikely(!(dma_flag & DMA_EOP) || !(dma_flag & DMA_SOP))) {
netif_err(priv, rx_status, dev,
"Droping fragmented packet!\n");
dev->stats.rx_dropped++;
dev->stats.rx_errors++;
dev_kfree_skb_any(cb->skb);
cb->skb = NULL;
goto refill;
}
/* report errors */
if (unlikely(dma_flag & (DMA_RX_CRC_ERROR |
DMA_RX_OV |
DMA_RX_NO |
DMA_RX_LG |
DMA_RX_RXER))) {
netif_err(priv, rx_status, dev, "dma_flag=0x%x\n",
(unsigned int)dma_flag);
if (dma_flag & DMA_RX_CRC_ERROR)
dev->stats.rx_crc_errors++;
if (dma_flag & DMA_RX_OV)
dev->stats.rx_over_errors++;
if (dma_flag & DMA_RX_NO)
dev->stats.rx_frame_errors++;
if (dma_flag & DMA_RX_LG)
dev->stats.rx_length_errors++;
dev->stats.rx_dropped++;
dev->stats.rx_errors++;
/* discard the packet and advance consumer index.*/
dev_kfree_skb_any(cb->skb);
cb->skb = NULL;
goto refill;
} /* error packet */
chksum_ok = (dma_flag & priv->dma_rx_chk_bit) &&
priv->desc_rxchk_en;
skb_put(skb, len);
if (priv->desc_64b_en) {
skb_pull(skb, 64);
len -= 64;
}
if (likely(chksum_ok))
skb->ip_summed = CHECKSUM_UNNECESSARY;
/* remove the 2 bytes the hardware added for IP alignment */
skb_pull(skb, 2);
len -= 2;
if (priv->crc_fwd_en) {
skb_trim(skb, len - ETH_FCS_LEN);
len -= ETH_FCS_LEN;
}
/* Finish setting up the received SKB and send it to the kernel */
skb->protocol = eth_type_trans(skb, priv->dev);
dev->stats.rx_packets++;
dev->stats.rx_bytes += len;
if (dma_flag & DMA_RX_MULT)
dev->stats.multicast++;
/* Notify kernel */
napi_gro_receive(&priv->napi, skb);
cb->skb = NULL;
netif_dbg(priv, rx_status, dev, "pushed up to kernel\n");
/* refill RX path on the current control block */
refill:
err = bcmgenet_rx_refill(priv, cb);
if (err)
netif_err(priv, rx_err, dev, "Rx refill failed\n");
}
return rxpktprocessed;
}
/* Assign skb to RX DMA descriptor. */
static int bcmgenet_alloc_rx_buffers(struct bcmgenet_priv *priv)
{
struct enet_cb *cb;
int ret = 0;
int i;
netif_dbg(priv, hw, priv->dev, "%s:\n", __func__);
/* loop here for each buffer needing assign */
for (i = 0; i < priv->num_rx_bds; i++) {
cb = &priv->rx_cbs[priv->rx_bd_assign_index];
if (cb->skb)
continue;
/* set the DMA descriptor length once and for all
* it will only change if we support dynamically sizing
* priv->rx_buf_len, but we do not
*/
dmadesc_set_length_status(priv, priv->rx_bd_assign_ptr,
priv->rx_buf_len << DMA_BUFLENGTH_SHIFT);
ret = bcmgenet_rx_refill(priv, cb);
if (ret)
break;
}
return ret;
}
static void bcmgenet_free_rx_buffers(struct bcmgenet_priv *priv)
{
struct enet_cb *cb;
int i;
for (i = 0; i < priv->num_rx_bds; i++) {
cb = &priv->rx_cbs[i];
if (dma_unmap_addr(cb, dma_addr)) {
dma_unmap_single(&priv->dev->dev,
dma_unmap_addr(cb, dma_addr),
priv->rx_buf_len, DMA_FROM_DEVICE);
dma_unmap_addr_set(cb, dma_addr, 0);
}
if (cb->skb)
bcmgenet_free_cb(cb);
}
}
static int reset_umac(struct bcmgenet_priv *priv)
{
struct device *kdev = &priv->pdev->dev;
unsigned int timeout = 0;
u32 reg;
/* 7358a0/7552a0: bad default in RBUF_FLUSH_CTRL.umac_sw_rst */
bcmgenet_rbuf_ctrl_set(priv, 0);
udelay(10);
/* disable MAC while updating its registers */
bcmgenet_umac_writel(priv, 0, UMAC_CMD);
/* issue soft reset, wait for it to complete */
bcmgenet_umac_writel(priv, CMD_SW_RESET, UMAC_CMD);
while (timeout++ < 1000) {
reg = bcmgenet_umac_readl(priv, UMAC_CMD);
if (!(reg & CMD_SW_RESET))
return 0;
udelay(1);
}
/* falling out of the loop means the reset never completed */
dev_err(kdev,
"timeout waiting for MAC to come out of reset\n");
return -ETIMEDOUT;
}
static int init_umac(struct bcmgenet_priv *priv)
{
struct device *kdev = &priv->pdev->dev;
int ret;
u32 reg, cpu_mask_clear;
dev_dbg(&priv->pdev->dev, "bcmgenet: init_umac\n");
ret = reset_umac(priv);
if (ret)
return ret;
bcmgenet_umac_writel(priv, 0, UMAC_CMD);
/* clear tx/rx counter */
bcmgenet_umac_writel(priv,
MIB_RESET_RX | MIB_RESET_TX | MIB_RESET_RUNT, UMAC_MIB_CTRL);
bcmgenet_umac_writel(priv, 0, UMAC_MIB_CTRL);
bcmgenet_umac_writel(priv, ENET_MAX_MTU_SIZE, UMAC_MAX_FRAME_LEN);
/* init rx registers, enable ip header optimization */
reg = bcmgenet_rbuf_readl(priv, RBUF_CTRL);
reg |= RBUF_ALIGN_2B;
bcmgenet_rbuf_writel(priv, reg, RBUF_CTRL);
if (!GENET_IS_V1(priv) && !GENET_IS_V2(priv))
bcmgenet_rbuf_writel(priv, 1, RBUF_TBUF_SIZE_CTRL);
/* Mask all interrupts.*/
bcmgenet_intrl2_0_writel(priv, 0xFFFFFFFF, INTRL2_CPU_MASK_SET);
bcmgenet_intrl2_0_writel(priv, 0xFFFFFFFF, INTRL2_CPU_CLEAR);
bcmgenet_intrl2_0_writel(priv, 0, INTRL2_CPU_MASK_CLEAR);
cpu_mask_clear = UMAC_IRQ_RXDMA_BDONE;
dev_dbg(kdev, "%s:Enabling RXDMA_BDONE interrupt\n", __func__);
/* Monitor cable plug/unplug events for the internal PHY */
if (phy_is_internal(priv->phydev))
cpu_mask_clear |= (UMAC_IRQ_LINK_DOWN | UMAC_IRQ_LINK_UP);
else if (priv->ext_phy)
cpu_mask_clear |= (UMAC_IRQ_LINK_DOWN | UMAC_IRQ_LINK_UP);
else if (priv->phy_interface == PHY_INTERFACE_MODE_MOCA) {
reg = bcmgenet_bp_mc_get(priv);
reg |= BIT(priv->hw_params->bp_in_en_shift);
/* bp_mask: back pressure mask */
if (netif_is_multiqueue(priv->dev))
reg |= priv->hw_params->bp_in_mask;
else
reg &= ~priv->hw_params->bp_in_mask;
bcmgenet_bp_mc_set(priv, reg);
}
/* Enable MDIO interrupts on GENET v3+ */
if (priv->hw_params->flags & GENET_HAS_MDIO_INTR)
cpu_mask_clear |= UMAC_IRQ_MDIO_DONE | UMAC_IRQ_MDIO_ERROR;
bcmgenet_intrl2_0_writel(priv, cpu_mask_clear,
INTRL2_CPU_MASK_CLEAR);
dev_dbg(kdev, "done init umac\n");
return 0;
}
/* Initialize all house-keeping variables for a TX ring, along
* with corresponding hardware registers
*/
static void bcmgenet_init_tx_ring(struct bcmgenet_priv *priv,
unsigned int index, unsigned int size,
unsigned int write_ptr, unsigned int end_ptr)
{
struct bcmgenet_tx_ring *ring = &priv->tx_rings[index];
u32 words_per_bd = WORDS_PER_BD(priv);
u32 flow_period_val = 0;
unsigned int first_bd;
spin_lock_init(&ring->lock);
ring->index = index;
if (index == DESC_INDEX) {
ring->queue = 0;
ring->int_enable = bcmgenet_tx_ring16_int_enable;
ring->int_disable = bcmgenet_tx_ring16_int_disable;
} else {
ring->queue = index + 1;
ring->int_enable = bcmgenet_tx_ring_int_enable;
ring->int_disable = bcmgenet_tx_ring_int_disable;
}
ring->cbs = priv->tx_cbs + write_ptr;
ring->size = size;
ring->c_index = 0;
ring->free_bds = size;
ring->write_ptr = write_ptr;
ring->cb_ptr = write_ptr;
ring->end_ptr = end_ptr - 1;
ring->prod_index = 0;
/* Set flow period for ring != 16 */
if (index != DESC_INDEX)
flow_period_val = ENET_MAX_MTU_SIZE << 16;
bcmgenet_tdma_ring_writel(priv, index, 0, TDMA_PROD_INDEX);
bcmgenet_tdma_ring_writel(priv, index, 0, TDMA_CONS_INDEX);
bcmgenet_tdma_ring_writel(priv, index, 1, DMA_MBUF_DONE_THRESH);
/* Disable rate control for now */
bcmgenet_tdma_ring_writel(priv, index, flow_period_val,
TDMA_FLOW_PERIOD);
/* Unclassified traffic goes to ring 16 */
bcmgenet_tdma_ring_writel(priv, index,
((size << DMA_RING_SIZE_SHIFT) | RX_BUF_LENGTH),
DMA_RING_BUF_SIZE);
first_bd = write_ptr;
/* Set start and end address, read and write pointers */
bcmgenet_tdma_ring_writel(priv, index, first_bd * words_per_bd,
DMA_START_ADDR);
bcmgenet_tdma_ring_writel(priv, index, first_bd * words_per_bd,
TDMA_READ_PTR);
bcmgenet_tdma_ring_writel(priv, index, first_bd,
TDMA_WRITE_PTR);
bcmgenet_tdma_ring_writel(priv, index, end_ptr * words_per_bd - 1,
DMA_END_ADDR);
}
/* Initialize a RDMA ring */
static int bcmgenet_init_rx_ring(struct bcmgenet_priv *priv,
unsigned int index, unsigned int size)
{
u32 words_per_bd = WORDS_PER_BD(priv);
int ret;
priv->num_rx_bds = TOTAL_DESC;
priv->rx_bds = priv->base + priv->hw_params->rdma_offset;
priv->rx_bd_assign_ptr = priv->rx_bds;
priv->rx_bd_assign_index = 0;
priv->rx_c_index = 0;
priv->rx_read_ptr = 0;
priv->rx_cbs = kzalloc(priv->num_rx_bds * sizeof(struct enet_cb),
GFP_KERNEL);
if (!priv->rx_cbs)
return -ENOMEM;
ret = bcmgenet_alloc_rx_buffers(priv);
if (ret) {
kfree(priv->rx_cbs);
return ret;
}
bcmgenet_rdma_ring_writel(priv, index, 0, RDMA_WRITE_PTR);
bcmgenet_rdma_ring_writel(priv, index, 0, RDMA_PROD_INDEX);
bcmgenet_rdma_ring_writel(priv, index, 0, RDMA_CONS_INDEX);
bcmgenet_rdma_ring_writel(priv, index,
((size << DMA_RING_SIZE_SHIFT) | RX_BUF_LENGTH),
DMA_RING_BUF_SIZE);
bcmgenet_rdma_ring_writel(priv, index, 0, DMA_START_ADDR);
bcmgenet_rdma_ring_writel(priv, index,
words_per_bd * size - 1, DMA_END_ADDR);
bcmgenet_rdma_ring_writel(priv, index,
(DMA_FC_THRESH_LO << DMA_XOFF_THRESHOLD_SHIFT) |
DMA_FC_THRESH_HI, RDMA_XON_XOFF_THRESH);
bcmgenet_rdma_ring_writel(priv, index, 0, RDMA_READ_PTR);
return ret;
}
/* Init the multi xmit queues, only available for GENET2+.
* The queue space is partitioned as follows:
*
* queues 0-3 are priority based, each one with 32 descriptors,
* with queue 0 being the highest priority queue.
*
* queue 16 is the default tx queue, with GENET_DEFAULT_BD_CNT
* descriptors: 256 - (number of tx queues * BDs per queue) = 128
* descriptors.
*
* The transmit control block pool is then partitioned as follows:
* - tx_ring_cbs[0] points to tx_cbs[0..31]
* - tx_ring_cbs[1] points to tx_cbs[32..63]
* - tx_ring_cbs[2] points to tx_cbs[64..95]
* - tx_ring_cbs[3] points to tx_cbs[96..127]
* - tx_cbs[128...255] are for the default queue 16
*/
static void bcmgenet_init_multiq(struct net_device *dev)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
unsigned int i, dma_enable;
u32 reg, dma_ctrl, ring_cfg = 0, dma_priority = 0;
if (!netif_is_multiqueue(dev)) {
netdev_warn(dev, "called with non multi queue aware HW\n");
return;
}
dma_ctrl = bcmgenet_tdma_readl(priv, DMA_CTRL);
dma_enable = dma_ctrl & DMA_EN;
dma_ctrl &= ~DMA_EN;
bcmgenet_tdma_writel(priv, dma_ctrl, DMA_CTRL);
/* Enable strict priority arbiter mode */
bcmgenet_tdma_writel(priv, DMA_ARBITER_SP, DMA_ARB_CTRL);
for (i = 0; i < priv->hw_params->tx_queues; i++) {
/* ring i uses tx_cbs[i * bds_cnt .. (i + 1) * bds_cnt - 1];
* the remaining control blocks back the default tx queue
* (ring 16)
*/
bcmgenet_init_tx_ring(priv, i, priv->hw_params->bds_cnt,
i * priv->hw_params->bds_cnt,
(i + 1) * priv->hw_params->bds_cnt);
/* Configure ring as descriptor ring and set up priority */
ring_cfg |= 1 << i;
dma_priority |= ((GENET_Q0_PRIORITY + i) <<
(GENET_MAX_MQ_CNT + 1) * i);
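/* Each ring's priority field is GENET_MAX_MQ_CNT + 1 = 5 bits
 * wide: ring 0 occupies bits 0-4, ring 1 bits 5-9, and so on;
 * ring 16's priority is written at bits 20+ below.
 */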
dma_ctrl |= 1 << (i + DMA_RING_BUF_EN_SHIFT);
}
/* Enable rings */
reg = bcmgenet_tdma_readl(priv, DMA_RING_CFG);
reg |= ring_cfg;
bcmgenet_tdma_writel(priv, reg, DMA_RING_CFG);
/* Use configured rings priority and set ring #16 priority */
reg = bcmgenet_tdma_readl(priv, DMA_RING_PRIORITY);
reg |= ((GENET_Q0_PRIORITY + priv->hw_params->tx_queues) << 20);
reg |= dma_priority;
bcmgenet_tdma_writel(priv, reg, DMA_PRIORITY);
/* Configure ring as descriptor ring and re-enable DMA if enabled */
reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
reg |= dma_ctrl;
if (dma_enable)
reg |= DMA_EN;
bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
}
static void bcmgenet_fini_dma(struct bcmgenet_priv *priv)
{
int i;
/* disable DMA */
bcmgenet_rdma_writel(priv, 0, DMA_CTRL);
bcmgenet_tdma_writel(priv, 0, DMA_CTRL);
for (i = 0; i < priv->num_tx_bds; i++) {
if (priv->tx_cbs[i].skb != NULL) {
dev_kfree_skb(priv->tx_cbs[i].skb);
priv->tx_cbs[i].skb = NULL;
}
}
bcmgenet_free_rx_buffers(priv);
kfree(priv->rx_cbs);
kfree(priv->tx_cbs);
}
/* init_edma: Initialize DMA control register */
static int bcmgenet_init_dma(struct bcmgenet_priv *priv)
{
int ret;
netif_dbg(priv, hw, priv->dev, "bcmgenet: init_edma\n");
/* by default, enable ring 16 (descriptor based) */
ret = bcmgenet_init_rx_ring(priv, DESC_INDEX, TOTAL_DESC);
if (ret) {
netdev_err(priv->dev, "failed to initialize RX ring\n");
return ret;
}
/* Init rDMA */
bcmgenet_rdma_writel(priv, DMA_MAX_BURST_LENGTH, DMA_SCB_BURST_SIZE);
/* Init tDMA */
bcmgenet_tdma_writel(priv, DMA_MAX_BURST_LENGTH, DMA_SCB_BURST_SIZE);
/* Initialize common TX ring structures */
priv->tx_bds = priv->base + priv->hw_params->tdma_offset;
priv->num_tx_bds = TOTAL_DESC;
priv->tx_cbs = kzalloc(priv->num_tx_bds * sizeof(struct enet_cb),
GFP_KERNEL);
if (!priv->tx_cbs) {
bcmgenet_fini_dma(priv);
return -ENOMEM;
}
/* initialize multi xmit queue */
bcmgenet_init_multiq(priv->dev);
/* initialize special ring 16 */
bcmgenet_init_tx_ring(priv, DESC_INDEX, GENET_DEFAULT_BD_CNT,
priv->hw_params->tx_queues * priv->hw_params->bds_cnt,
TOTAL_DESC);
return 0;
}
/* NAPI polling method*/
static int bcmgenet_poll(struct napi_struct *napi, int budget)
{
struct bcmgenet_priv *priv = container_of(napi,
struct bcmgenet_priv, napi);
unsigned int work_done;
/* tx reclaim */
bcmgenet_tx_reclaim(priv->dev, &priv->tx_rings[DESC_INDEX]);
work_done = bcmgenet_desc_rx(priv, budget);
/* Advancing our consumer index*/
priv->rx_c_index += work_done;
priv->rx_c_index &= DMA_C_INDEX_MASK;
bcmgenet_rdma_ring_writel(priv, DESC_INDEX,
priv->rx_c_index, RDMA_CONS_INDEX);
if (work_done < budget) {
napi_complete(napi);
bcmgenet_intrl2_0_writel(priv,
UMAC_IRQ_RXDMA_BDONE, INTRL2_CPU_MASK_CLEAR);
}
return work_done;
}
/* Interrupt bottom half */
static void bcmgenet_irq_task(struct work_struct *work)
{
struct bcmgenet_priv *priv = container_of(
work, struct bcmgenet_priv, bcmgenet_irq_work);
netif_dbg(priv, intr, priv->dev, "%s\n", __func__);
/* Link UP/DOWN event */
if ((priv->hw_params->flags & GENET_HAS_MDIO_INTR) &&
(priv->irq0_stat & (UMAC_IRQ_LINK_UP|UMAC_IRQ_LINK_DOWN))) {
if (priv->phydev)
phy_mac_interrupt(priv->phydev,
(priv->irq0_stat & UMAC_IRQ_LINK_UP));
priv->irq0_stat &= ~(UMAC_IRQ_LINK_UP|UMAC_IRQ_LINK_DOWN);
}
}
/* bcmgenet_isr1: interrupt handler for ring buffer. */
static irqreturn_t bcmgenet_isr1(int irq, void *dev_id)
{
struct bcmgenet_priv *priv = dev_id;
unsigned int index;
/* Save irq status for bottom-half processing. */
priv->irq1_stat =
bcmgenet_intrl2_1_readl(priv, INTRL2_CPU_STAT) &
~priv->int1_mask;
/* clear interrupts */
bcmgenet_intrl2_1_writel(priv, priv->irq1_stat, INTRL2_CPU_CLEAR);
netif_dbg(priv, intr, priv->dev,
"%s: IRQ=0x%x\n", __func__, priv->irq1_stat);
/* Check the MBDONE interrupts.
* packet is done, reclaim descriptors
*/
if (priv->irq1_stat & 0x0000ffff) {
for (index = 0; index < 16; index++) {
if (priv->irq1_stat & (1 << index))
bcmgenet_tx_reclaim(priv->dev,
&priv->tx_rings[index]);
}
}
return IRQ_HANDLED;
}
/* bcmgenet_isr0: Handle various interrupts. */
static irqreturn_t bcmgenet_isr0(int irq, void *dev_id)
{
struct bcmgenet_priv *priv = dev_id;
/* Save irq status for bottom-half processing. */
priv->irq0_stat =
bcmgenet_intrl2_0_readl(priv, INTRL2_CPU_STAT) &
~bcmgenet_intrl2_0_readl(priv, INTRL2_CPU_MASK_STATUS);
/* clear interrupts */
bcmgenet_intrl2_0_writel(priv, priv->irq0_stat, INTRL2_CPU_CLEAR);
netif_dbg(priv, intr, priv->dev,
"IRQ=0x%x\n", priv->irq0_stat);
if (priv->irq0_stat & (UMAC_IRQ_RXDMA_BDONE | UMAC_IRQ_RXDMA_PDONE)) {
/* We use NAPI (software interrupt throttling) when Rx
* Descriptor throttling is not used.
* Disable the interrupt here; it is re-enabled in the poll method.
*/
if (likely(napi_schedule_prep(&priv->napi))) {
bcmgenet_intrl2_0_writel(priv,
UMAC_IRQ_RXDMA_BDONE, INTRL2_CPU_MASK_SET);
__napi_schedule(&priv->napi);
}
}
if (priv->irq0_stat &
(UMAC_IRQ_TXDMA_BDONE | UMAC_IRQ_TXDMA_PDONE)) {
/* Tx reclaim */
bcmgenet_tx_reclaim(priv->dev, &priv->tx_rings[DESC_INDEX]);
}
if (priv->irq0_stat & (UMAC_IRQ_PHY_DET_R |
UMAC_IRQ_PHY_DET_F |
UMAC_IRQ_LINK_UP |
UMAC_IRQ_LINK_DOWN |
UMAC_IRQ_HFB_SM |
UMAC_IRQ_HFB_MM |
UMAC_IRQ_MPD_R)) {
/* all other interrupts of interest are handled in the bottom half */
schedule_work(&priv->bcmgenet_irq_work);
}
if ((priv->hw_params->flags & GENET_HAS_MDIO_INTR) &&
priv->irq0_stat & (UMAC_IRQ_MDIO_DONE | UMAC_IRQ_MDIO_ERROR)) {
priv->irq0_stat &= ~(UMAC_IRQ_MDIO_DONE | UMAC_IRQ_MDIO_ERROR);
wake_up(&priv->wq);
}
return IRQ_HANDLED;
}
static void bcmgenet_umac_reset(struct bcmgenet_priv *priv)
{
u32 reg;
reg = bcmgenet_rbuf_ctrl_get(priv);
reg |= BIT(1);
bcmgenet_rbuf_ctrl_set(priv, reg);
udelay(10);
reg &= ~BIT(1);
bcmgenet_rbuf_ctrl_set(priv, reg);
udelay(10);
}
static void bcmgenet_set_hw_addr(struct bcmgenet_priv *priv,
unsigned char *addr)
{
bcmgenet_umac_writel(priv, (addr[0] << 24) | (addr[1] << 16) |
(addr[2] << 8) | addr[3], UMAC_MAC0);
bcmgenet_umac_writel(priv, (addr[4] << 8) | addr[5], UMAC_MAC1);
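/* e.g. the address 00:10:18:36:23:1a from the binding document above
 * packs into UMAC_MAC0 = 0x00101836 and UMAC_MAC1 = 0x231a
 */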
}
static int bcmgenet_wol_resume(struct bcmgenet_priv *priv)
{
int ret;
/* From WOL-enabled suspend, switch to regular clock */
clk_disable(priv->clk_wol);
/* init umac registers to synchronize s/w with h/w */
ret = init_umac(priv);
if (ret)
return ret;
if (priv->phydev)
phy_init_hw(priv->phydev);
/* Speed settings must be restored */
bcmgenet_mii_config(priv->dev);
return 0;
}
/* Returns a reusable dma control register value */
static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv)
{
u32 reg;
u32 dma_ctrl;
/* disable DMA */
dma_ctrl = 1 << (DESC_INDEX + DMA_RING_BUF_EN_SHIFT) | DMA_EN;
reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
reg &= ~dma_ctrl;
bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
reg = bcmgenet_rdma_readl(priv, DMA_CTRL);
reg &= ~dma_ctrl;
bcmgenet_rdma_writel(priv, reg, DMA_CTRL);
bcmgenet_umac_writel(priv, 1, UMAC_TX_FLUSH);
udelay(10);
bcmgenet_umac_writel(priv, 0, UMAC_TX_FLUSH);
return dma_ctrl;
}
static void bcmgenet_enable_dma(struct bcmgenet_priv *priv, u32 dma_ctrl)
{
u32 reg;
reg = bcmgenet_rdma_readl(priv, DMA_CTRL);
reg |= dma_ctrl;
bcmgenet_rdma_writel(priv, reg, DMA_CTRL);
reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
reg |= dma_ctrl;
bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
}
static int bcmgenet_open(struct net_device *dev)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
unsigned long dma_ctrl;
u32 reg;
int ret;
netif_dbg(priv, ifup, dev, "bcmgenet_open\n");
/* Turn on the clock */
if (!IS_ERR(priv->clk))
clk_prepare_enable(priv->clk);
/* take MAC out of reset */
bcmgenet_umac_reset(priv);
ret = init_umac(priv);
if (ret)
goto err_clk_disable;
/* disable ethernet MAC while updating its registers */
reg = bcmgenet_umac_readl(priv, UMAC_CMD);
reg &= ~(CMD_TX_EN | CMD_RX_EN);
bcmgenet_umac_writel(priv, reg, UMAC_CMD);
bcmgenet_set_hw_addr(priv, dev->dev_addr);
if (priv->wol_enabled) {
ret = bcmgenet_wol_resume(priv);
if (ret)
return ret;
}
if (phy_is_internal(priv->phydev)) {
reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
reg |= EXT_ENERGY_DET_MASK;
bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
}
/* Disable RX/TX DMA and flush TX queues */
dma_ctrl = bcmgenet_dma_disable(priv);
/* Reinitialize TDMA and RDMA and SW housekeeping */
ret = bcmgenet_init_dma(priv);
if (ret) {
netdev_err(dev, "failed to initialize DMA\n");
goto err_fini_dma;
}
/* Always enable ring 16 - descriptor ring */
bcmgenet_enable_dma(priv, dma_ctrl);
ret = request_irq(priv->irq0, bcmgenet_isr0, IRQF_SHARED,
dev->name, priv);
if (ret < 0) {
netdev_err(dev, "can't request IRQ %d\n", priv->irq0);
goto err_fini_dma;
}
ret = request_irq(priv->irq1, bcmgenet_isr1, IRQF_SHARED,
dev->name, priv);
if (ret < 0) {
netdev_err(dev, "can't request IRQ %d\n", priv->irq1);
goto err_irq0;
}
/* Start the network engine */
napi_enable(&priv->napi);
reg = bcmgenet_umac_readl(priv, UMAC_CMD);
reg |= (CMD_TX_EN | CMD_RX_EN);
bcmgenet_umac_writel(priv, reg, UMAC_CMD);
/* Make sure we reflect the value of CRC_CMD_FWD */
priv->crc_fwd_en = !!(reg & CMD_CRC_FWD);
device_set_wakeup_capable(&dev->dev, 1);
if (phy_is_internal(priv->phydev))
bcmgenet_power_up(priv, GENET_POWER_PASSIVE);
netif_tx_start_all_queues(dev);
if (priv->phydev)
phy_start(priv->phydev);
return 0;
err_irq0:
free_irq(priv->irq0, priv); /* dev_id must match request_irq() */
err_fini_dma:
bcmgenet_fini_dma(priv);
err_clk_disable:
if (!IS_ERR(priv->clk))
clk_disable_unprepare(priv->clk);
return ret;
}
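/* Quiesce the TX and RX DMA engines: clear DMA_EN and poll DMA_STATUS
 * until the hardware reports DMA_DISABLED, or give up after
 * DMA_TIMEOUT_VAL microseconds.
 */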
static int bcmgenet_dma_teardown(struct bcmgenet_priv *priv)
{
int ret = 0;
int timeout = 0;
u32 reg;
/* Disable TDMA so no more frames are added to the TX DMA */
reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
reg &= ~DMA_EN;
bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
/* Check TDMA status register to confirm TDMA is disabled */
while (timeout++ < DMA_TIMEOUT_VAL) {
reg = bcmgenet_tdma_readl(priv, DMA_STATUS);
if (reg & DMA_DISABLED)
break;
udelay(1);
}
if (timeout == DMA_TIMEOUT_VAL) {
netdev_warn(priv->dev,
"Timed out while disabling TX DMA\n");
ret = -ETIMEDOUT;
}
/* Wait 10ms for packet drain in both tx and rx dma */
usleep_range(10000, 20000);
/* Disable RDMA */
reg = bcmgenet_rdma_readl(priv, DMA_CTRL);
reg &= ~DMA_EN;
bcmgenet_rdma_writel(priv, reg, DMA_CTRL);
timeout = 0;
/* Check RDMA status register to confirm RDMA is disabled */
while (timeout++ < DMA_TIMEOUT_VAL) {
reg = bcmgenet_rdma_readl(priv, DMA_STATUS);
if (reg & DMA_DISABLED)
break;
udelay(1);
}
if (timeout == DMA_TIMEOUT_VAL) {
netdev_warn(priv->dev,
"Timed out while disabling RX DMA\n");
ret = -ETIMEDOUT;
}
return ret;
}
static int bcmgenet_close(struct net_device *dev)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
int ret;
u32 reg;
netif_dbg(priv, ifdown, dev, "bcmgenet_close\n");
if (priv->phydev)
phy_stop(priv->phydev);
/* Disable MAC receive */
reg = bcmgenet_umac_readl(priv, UMAC_CMD);
reg &= ~CMD_RX_EN;
bcmgenet_umac_writel(priv, reg, UMAC_CMD);
netif_tx_stop_all_queues(dev);
ret = bcmgenet_dma_teardown(priv);
if (ret)
return ret;
/* Disable MAC transmit. TX DMA must be disabled before this point. */
reg = bcmgenet_umac_readl(priv, UMAC_CMD);
reg &= ~CMD_TX_EN;
bcmgenet_umac_writel(priv, reg, UMAC_CMD);
napi_disable(&priv->napi);
/* tx reclaim */
bcmgenet_tx_reclaim_all(dev);
bcmgenet_fini_dma(priv);
free_irq(priv->irq0, priv);
free_irq(priv->irq1, priv);
/* Wait for pending work items to complete - we are stopping
* the clock now. Since interrupts are disabled, no new work
* will be scheduled.
*/
cancel_work_sync(&priv->bcmgenet_irq_work);
if (phy_is_internal(priv->phydev))
bcmgenet_power_down(priv, GENET_POWER_PASSIVE);
if (priv->wol_enabled)
clk_enable(priv->clk_wol);
if (!IS_ERR(priv->clk))
clk_disable_unprepare(priv->clk);
return 0;
}
static void bcmgenet_timeout(struct net_device *dev)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
netif_dbg(priv, tx_err, dev, "bcmgenet_timeout\n");
dev->trans_start = jiffies;
dev->stats.tx_errors++;
netif_tx_wake_all_queues(dev);
}
#define MAX_MC_COUNT 16
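/* Program one 48-bit address into two consecutive MDF address
 * registers and enable the matching filter bit in MDF_CTRL; *i counts
 * 32-bit register slots (two per address) and *mc counts enabled
 * filters.
 */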
static inline void bcmgenet_set_mdf_addr(struct bcmgenet_priv *priv,
unsigned char *addr,
int *i,
int *mc)
{
u32 reg;
bcmgenet_umac_writel(priv,
addr[0] << 8 | addr[1], UMAC_MDF_ADDR + (*i * 4));
bcmgenet_umac_writel(priv,
addr[2] << 24 | addr[3] << 16 |
addr[4] << 8 | addr[5],
UMAC_MDF_ADDR + ((*i + 1) * 4));
reg = bcmgenet_umac_readl(priv, UMAC_MDF_CTRL);
reg |= (1 << (MAX_MC_COUNT - *mc));
bcmgenet_umac_writel(priv, reg, UMAC_MDF_CTRL);
*i += 2;
(*mc)++;
}
static void bcmgenet_set_rx_mode(struct net_device *dev)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
struct netdev_hw_addr *ha;
int i, mc;
u32 reg;
netif_dbg(priv, hw, dev, "%s: %08X\n", __func__, dev->flags);
/* Promiscuous mode */
reg = bcmgenet_umac_readl(priv, UMAC_CMD);
if (dev->flags & IFF_PROMISC) {
reg |= CMD_PROMISC;
bcmgenet_umac_writel(priv, reg, UMAC_CMD);
bcmgenet_umac_writel(priv, 0, UMAC_MDF_CTRL);
return;
} else {
reg &= ~CMD_PROMISC;
bcmgenet_umac_writel(priv, reg, UMAC_CMD);
}
/* UniMac doesn't support ALLMULTI */
if (dev->flags & IFF_ALLMULTI) {
netdev_warn(dev, "ALLMULTI is not supported\n");
return;
}
/* update MDF filter */
i = 0;
mc = 0;
/* Broadcast */
bcmgenet_set_mdf_addr(priv, dev->broadcast, &i, &mc);
/* my own address */
bcmgenet_set_mdf_addr(priv, dev->dev_addr, &i, &mc);
/* Unicast list */
if (netdev_uc_count(dev) > (MAX_MC_COUNT - mc))
return;
if (!netdev_uc_empty(dev))
netdev_for_each_uc_addr(ha, dev)
bcmgenet_set_mdf_addr(priv, ha->addr, &i, &mc);
/* Multicast */
if (netdev_mc_empty(dev) || netdev_mc_count(dev) >= (MAX_MC_COUNT - mc))
return;
netdev_for_each_mc_addr(ha, dev)
bcmgenet_set_mdf_addr(priv, ha->addr, &i, &mc);
}
/* Set the hardware MAC address. */
static int bcmgenet_set_mac_addr(struct net_device *dev, void *p)
{
struct sockaddr *addr = p;
/* Setting the MAC address at the hardware level is not possible
* without disabling the UniMAC RX/TX enable bits.
*/
if (netif_running(dev))
return -EBUSY;
ether_addr_copy(dev->dev_addr, addr->sa_data);
return 0;
}
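/* On multiqueue devices, trust the queue mapping already recorded in
 * the skb; otherwise everything goes through queue 0.
 */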
static u16 bcmgenet_select_queue(struct net_device *dev,
struct sk_buff *skb, void *accel_priv)
{
return netif_is_multiqueue(dev) ? skb->queue_mapping : 0;
}
static const struct net_device_ops bcmgenet_netdev_ops = {
.ndo_open = bcmgenet_open,
.ndo_stop = bcmgenet_close,
.ndo_start_xmit = bcmgenet_xmit,
.ndo_select_queue = bcmgenet_select_queue,
.ndo_tx_timeout = bcmgenet_timeout,
.ndo_set_rx_mode = bcmgenet_set_rx_mode,
.ndo_set_mac_address = bcmgenet_set_mac_addr,
.ndo_do_ioctl = bcmgenet_ioctl,
.ndo_set_features = bcmgenet_set_features,
};
/* Array of GENET hardware parameters/characteristics */
static struct bcmgenet_hw_params bcmgenet_hw_params[] = {
[GENET_V1] = {
.tx_queues = 0,
.rx_queues = 0,
.bds_cnt = 0,
.bp_in_en_shift = 16,
.bp_in_mask = 0xffff,
.hfb_filter_cnt = 16,
.qtag_mask = 0x1F,
.hfb_offset = 0x1000,
.rdma_offset = 0x2000,
.tdma_offset = 0x3000,
.words_per_bd = 2,
},
[GENET_V2] = {
.tx_queues = 4,
.rx_queues = 4,
.bds_cnt = 32,
.bp_in_en_shift = 16,
.bp_in_mask = 0xffff,
.hfb_filter_cnt = 16,
.qtag_mask = 0x1F,
.tbuf_offset = 0x0600,
.hfb_offset = 0x1000,
.hfb_reg_offset = 0x2000,
.rdma_offset = 0x3000,
.tdma_offset = 0x4000,
.words_per_bd = 2,
.flags = GENET_HAS_EXT,
},
[GENET_V3] = {
.tx_queues = 4,
.rx_queues = 4,
.bds_cnt = 32,
.bp_in_en_shift = 17,
.bp_in_mask = 0x1ffff,
.hfb_filter_cnt = 48,
.qtag_mask = 0x3F,
.tbuf_offset = 0x0600,
.hfb_offset = 0x8000,
.hfb_reg_offset = 0xfc00,
.rdma_offset = 0x10000,
.tdma_offset = 0x11000,
.words_per_bd = 2,
.flags = GENET_HAS_EXT | GENET_HAS_MDIO_INTR,
},
[GENET_V4] = {
.tx_queues = 4,
.rx_queues = 4,
.bds_cnt = 32,
.bp_in_en_shift = 17,
.bp_in_mask = 0x1ffff,
.hfb_filter_cnt = 48,
.qtag_mask = 0x3F,
.tbuf_offset = 0x0600,
.hfb_offset = 0x8000,
.hfb_reg_offset = 0xfc00,
.rdma_offset = 0x2000,
.tdma_offset = 0x4000,
.words_per_bd = 3,
.flags = GENET_HAS_40BITS | GENET_HAS_EXT | GENET_HAS_MDIO_INTR,
},
};
/* Infer hardware parameters from the detected GENET version */
static void bcmgenet_set_hw_params(struct bcmgenet_priv *priv)
{
struct bcmgenet_hw_params *params;
u32 reg;
u8 major;
if (GENET_IS_V4(priv)) {
bcmgenet_dma_regs = bcmgenet_dma_regs_v3plus;
genet_dma_ring_regs = genet_dma_ring_regs_v4;
priv->dma_rx_chk_bit = DMA_RX_CHK_V3PLUS;
priv->version = GENET_V4;
} else if (GENET_IS_V3(priv)) {
bcmgenet_dma_regs = bcmgenet_dma_regs_v3plus;
genet_dma_ring_regs = genet_dma_ring_regs_v123;
priv->dma_rx_chk_bit = DMA_RX_CHK_V3PLUS;
priv->version = GENET_V3;
} else if (GENET_IS_V2(priv)) {
bcmgenet_dma_regs = bcmgenet_dma_regs_v2;
genet_dma_ring_regs = genet_dma_ring_regs_v123;
priv->dma_rx_chk_bit = DMA_RX_CHK_V12;
priv->version = GENET_V2;
} else if (GENET_IS_V1(priv)) {
bcmgenet_dma_regs = bcmgenet_dma_regs_v1;
genet_dma_ring_regs = genet_dma_ring_regs_v123;
priv->dma_rx_chk_bit = DMA_RX_CHK_V12;
priv->version = GENET_V1;
}
/* enum genet_version starts at 1 */
priv->hw_params = &bcmgenet_hw_params[priv->version];
params = priv->hw_params;
/* Read GENET HW version */
reg = bcmgenet_sys_readl(priv, SYS_REV_CTRL);
major = (reg >> 24) & 0x0f;
if (major == 5)
major = 4;
else if (major == 0)
major = 1;
if (major != priv->version) {
dev_err(&priv->pdev->dev,
"GENET version mismatch, got: %d, configured for: %d\n",
major, priv->version);
}
/* Print the GENET core version */
dev_info(&priv->pdev->dev, "GENET " GENET_VER_FMT,
major, (reg >> 16) & 0x0f, reg & 0xffff);
#ifdef CONFIG_PHYS_ADDR_T_64BIT
if (!(params->flags & GENET_HAS_40BITS))
pr_warn("GENET does not support 40-bits PA\n");
#endif
pr_debug("Configuration for version: %d\n"
"TXq: %1d, RXq: %1d, BDs: %1d\n"
"BP << en: %2d, BP msk: 0x%05x\n"
"HFB count: %2d, QTAQ msk: 0x%05x\n"
"TBUF: 0x%04x, HFB: 0x%04x, HFBreg: 0x%04x\n"
"RDMA: 0x%05x, TDMA: 0x%05x\n"
"Words/BD: %d\n",
priv->version,
params->tx_queues, params->rx_queues, params->bds_cnt,
params->bp_in_en_shift, params->bp_in_mask,
params->hfb_filter_cnt, params->qtag_mask,
params->tbuf_offset, params->hfb_offset,
params->hfb_reg_offset,
params->rdma_offset, params->tdma_offset,
params->words_per_bd);
}
static const struct of_device_id bcmgenet_match[] = {
{ .compatible = "brcm,genet-v1", .data = (void *)GENET_V1 },
{ .compatible = "brcm,genet-v2", .data = (void *)GENET_V2 },
{ .compatible = "brcm,genet-v3", .data = (void *)GENET_V3 },
{ .compatible = "brcm,genet-v4", .data = (void *)GENET_V4 },
{ },
};
static int bcmgenet_probe(struct platform_device *pdev)
{
struct device_node *dn = pdev->dev.of_node;
const struct of_device_id *of_id;
struct bcmgenet_priv *priv;
struct net_device *dev;
const void *macaddr;
struct resource *r;
int err = -EIO;
/* Up to GENET_MAX_MQ_CNT + 1 TX queues and a single RX queue */
dev = alloc_etherdev_mqs(sizeof(*priv), GENET_MAX_MQ_CNT + 1, 1);
if (!dev) {
dev_err(&pdev->dev, "can't allocate net device\n");
return -ENOMEM;
}
of_id = of_match_node(bcmgenet_match, dn);
if (!of_id) {
err = -EINVAL;
goto err;
}
priv = netdev_priv(dev);
priv->irq0 = platform_get_irq(pdev, 0);
priv->irq1 = platform_get_irq(pdev, 1);
if (!priv->irq0 || !priv->irq1) {
dev_err(&pdev->dev, "can't find IRQs\n");
err = -EINVAL;
goto err;
}
macaddr = of_get_mac_address(dn);
if (!macaddr) {
dev_err(&pdev->dev, "can't find MAC address\n");
err = -EINVAL;
goto err;
}
r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
priv->base = devm_request_and_ioremap(&pdev->dev, r);
if (!priv->base) {
dev_err(&pdev->dev, "can't ioremap\n");
err = -EINVAL;
goto err;
}
SET_NETDEV_DEV(dev, &pdev->dev);
dev_set_drvdata(&pdev->dev, dev);
ether_addr_copy(dev->dev_addr, macaddr);
dev->watchdog_timeo = 2 * HZ;
SET_ETHTOOL_OPS(dev, &bcmgenet_ethtool_ops);
dev->netdev_ops = &bcmgenet_netdev_ops;
netif_napi_add(dev, &priv->napi, bcmgenet_poll, 64);
priv->msg_enable = netif_msg_init(-1, GENET_MSG_DEFAULT);
/* Set hardware features */
dev->hw_features |= NETIF_F_SG | NETIF_F_IP_CSUM |
NETIF_F_IPV6_CSUM | NETIF_F_RXCSUM;
/* Set the needed headroom to account for any possible
 * features being enabled or disabled at runtime.
 */
dev->needed_headroom += 64;
netdev_boot_setup_check(dev);
priv->dev = dev;
priv->pdev = pdev;
priv->version = (enum bcmgenet_version)of_id->data;
bcmgenet_set_hw_params(priv);
spin_lock_init(&priv->lock);
/* Mii wait queue */
init_waitqueue_head(&priv->wq);
/* Always use RX_BUF_LENGTH (2KB) buffer for all chips */
priv->rx_buf_len = RX_BUF_LENGTH;
INIT_WORK(&priv->bcmgenet_irq_work, bcmgenet_irq_task);
priv->clk = devm_clk_get(&priv->pdev->dev, "enet");
if (IS_ERR(priv->clk))
dev_warn(&priv->pdev->dev, "failed to get enet clock\n");
priv->clk_wol = devm_clk_get(&priv->pdev->dev, "enet-wol");
if (IS_ERR(priv->clk_wol))
dev_warn(&priv->pdev->dev, "failed to get enet-wol clock\n");
if (!IS_ERR(priv->clk))
clk_prepare_enable(priv->clk);
err = reset_umac(priv);
if (err)
goto err_clk_disable;
err = bcmgenet_mii_init(dev);
if (err)
goto err_clk_disable;
/* Set up the number of real queues + 1 (GENET_V1 has 0 hardware
 * queues, just the ring 16 descriptor-based TX queue).
 */
netif_set_real_num_tx_queues(priv->dev, priv->hw_params->tx_queues + 1);
netif_set_real_num_rx_queues(priv->dev, priv->hw_params->rx_queues + 1);
err = register_netdev(dev);
if (err)
goto err_clk_disable;
/* Turn off the main clock, WOL clock is handled separately */
if (!IS_ERR(priv->clk))
clk_disable_unprepare(priv->clk);
return err;
err_clk_disable:
if (!IS_ERR(priv->clk))
clk_disable_unprepare(priv->clk);
err:
free_netdev(dev);
return err;
}
static int bcmgenet_remove(struct platform_device *pdev)
{
struct bcmgenet_priv *priv = dev_to_priv(&pdev->dev);
dev_set_drvdata(&pdev->dev, NULL);
unregister_netdev(priv->dev);
bcmgenet_mii_exit(priv->dev);
free_netdev(priv->dev);
return 0;
}
static struct platform_driver bcmgenet_driver = {
.probe = bcmgenet_probe,
.remove = bcmgenet_remove,
.driver = {
.name = "bcmgenet",
.owner = THIS_MODULE,
.of_match_table = bcmgenet_match,
},
};
module_platform_driver(bcmgenet_driver);
MODULE_AUTHOR("Broadcom Corporation");
MODULE_DESCRIPTION("Broadcom GENET Ethernet controller driver");
MODULE_ALIAS("platform:bcmgenet");
MODULE_LICENSE("GPL");
/*
* Copyright (c) 2014 Broadcom Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*
*
*/
#ifndef __BCMGENET_H__
#define __BCMGENET_H__
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/spinlock.h>
#include <linux/clk.h>
#include <linux/mii.h>
#include <linux/if_vlan.h>
#include <linux/phy.h>
/* total number of Buffer Descriptors, same for Rx/Tx */
#define TOTAL_DESC 256
/* which ring is descriptor based */
#define DESC_INDEX 16
/* Body(1500) + EH_SIZE(14) + VLANTAG(4) + BRCMTAG(6) + FCS(4) = 1528;
 * with ENET_PAD(8) this gives 1536, a multiple of 256 bytes.
 */
#define ENET_BRCM_TAG_LEN 6
#define ENET_PAD 8
#define ENET_MAX_MTU_SIZE (ETH_DATA_LEN + ETH_HLEN + VLAN_HLEN + \
ENET_BRCM_TAG_LEN + ETH_FCS_LEN + ENET_PAD)
#define DMA_MAX_BURST_LENGTH 0x10
/* misc. configuration */
#define CLEAR_ALL_HFB 0xFF
#define DMA_FC_THRESH_HI (TOTAL_DESC >> 4)
#define DMA_FC_THRESH_LO 5
/* 64B receive/transmit status block */
struct status_64 {
u32 length_status; /* length and peripheral status */
u32 ext_status; /* Extended status */
u32 rx_csum; /* partial rx checksum */
u32 unused1[9]; /* unused */
u32 tx_csum_info; /* Tx checksum info. */
u32 unused2[3]; /* unused */
};
/* Rx status bits */
#define STATUS_RX_EXT_MASK 0x1FFFFF
#define STATUS_RX_CSUM_MASK 0xFFFF
#define STATUS_RX_CSUM_OK 0x10000
#define STATUS_RX_CSUM_FR 0x20000
#define STATUS_RX_PROTO_TCP 0
#define STATUS_RX_PROTO_UDP 1
#define STATUS_RX_PROTO_ICMP 2
#define STATUS_RX_PROTO_OTHER 3
#define STATUS_RX_PROTO_MASK 3
#define STATUS_RX_PROTO_SHIFT 18
#define STATUS_FILTER_INDEX_MASK 0xFFFF
/* Tx status bits */
#define STATUS_TX_CSUM_START_MASK 0X7FFF
#define STATUS_TX_CSUM_START_SHIFT 16
#define STATUS_TX_CSUM_PROTO_UDP 0x8000
#define STATUS_TX_CSUM_OFFSET_MASK 0x7FFF
#define STATUS_TX_CSUM_LV 0x80000000
/* DMA Descriptor */
#define DMA_DESC_LENGTH_STATUS 0x00 /* buffer length and status bits */
#define DMA_DESC_ADDRESS_LO 0x04 /* lower bits of PA */
#define DMA_DESC_ADDRESS_HI 0x08 /* upper 32 bits of PA, GENETv4+ */
/* Rx/Tx common counter group */
struct bcmgenet_pkt_counters {
u32 cnt_64; /* RO Received/Transmitted 64 byte packets */
u32 cnt_127; /* RO Rx/Tx 127 byte packets */
u32 cnt_255; /* RO Rx/Tx 65-255 byte packets */
u32 cnt_511; /* RO Rx/Tx 256-511 byte packets */
u32 cnt_1023; /* RO Rx/Tx 512-1023 byte packets */
u32 cnt_1518; /* RO Rx/Tx 1024-1518 byte packets */
u32 cnt_mgv; /* RO Rx/Tx 1519-1522 byte good VLAN packets */
u32 cnt_2047; /* RO Rx/Tx 1522-2047 byte packets */
u32 cnt_4095; /* RO Rx/Tx 2048-4095 byte packets */
u32 cnt_9216; /* RO Rx/Tx 4096-9216 byte packets */
};
/* RSV, Receive Status Vector */
struct bcmgenet_rx_counters {
struct bcmgenet_pkt_counters pkt_cnt;
u32 pkt; /* RO (0x428) Received pkt count */
u32 bytes; /* RO Received byte count */
u32 mca; /* RO # of Received multicast pkt */
u32 bca; /* RO # of Received broadcast pkt */
u32 fcs; /* RO # of Received FCS error */
u32 cf; /* RO # of Received control frame pkt */
u32 pf; /* RO # of Received pause frame pkt */
u32 uo; /* RO # of unknown op code pkt */
u32 aln; /* RO # of alignment error count */
u32 flr; /* RO # of frame length out of range count */
u32 cde; /* RO # of code error pkt */
u32 fcr; /* RO # of carrier sense error pkt */
u32 ovr; /* RO # of oversize pkt */
u32 jbr; /* RO # of jabber count */
u32 mtue; /* RO # of MTU error pkt */
u32 pok; /* RO # of Received good pkt */
u32 uc; /* RO # of unicast pkt */
u32 ppp; /* RO # of PPP pkt */
u32 rcrc; /* RO (0x470) # of CRC match pkt */
};
/* TSV, Transmit Status Vector */
struct bcmgenet_tx_counters {
struct bcmgenet_pkt_counters pkt_cnt;
u32 pkts; /* RO (0x4a8) Transmitted pkt count */
u32 mca; /* RO # of transmitted multicast pkt */
u32 bca; /* RO # of transmitted broadcast pkt */
u32 pf; /* RO # of transmitted pause frame count */
u32 cf; /* RO # of transmitted control frame count */
u32 fcs; /* RO # of transmitted FCS error count */
u32 ovr; /* RO # of transmitted oversize pkt */
u32 drf; /* RO # of transmitted deferral pkt */
u32 edf; /* RO # of transmitted excessive deferral pkt */
u32 scl; /* RO # of transmitted single collision pkt */
u32 mcl; /* RO # of transmitted multiple collision pkt */
u32 lcl; /* RO # of transmitted late collision pkt */
u32 ecl; /* RO # of transmitted excessive collision pkt */
u32 frg; /* RO # of transmitted fragment pkt */
u32 ncl; /* RO # of transmitted total collision count */
u32 jbr; /* RO # of transmitted jabber count */
u32 bytes; /* RO # of transmitted byte count */
u32 pok; /* RO # of transmitted good pkt */
u32 uc; /* RO (0x4f0) # of transmitted unicast pkt */
};
struct bcmgenet_mib_counters {
struct bcmgenet_rx_counters rx;
struct bcmgenet_tx_counters tx;
u32 rx_runt_cnt;
u32 rx_runt_fcs;
u32 rx_runt_fcs_align;
u32 rx_runt_bytes;
u32 rbuf_ovflow_cnt;
u32 rbuf_err_cnt;
u32 mdf_err_cnt;
};
#define UMAC_HD_BKP_CTRL 0x004
#define HD_FC_EN (1 << 0)
#define HD_FC_BKOFF_OK (1 << 1)
#define IPG_CONFIG_RX_SHIFT 2
#define IPG_CONFIG_RX_MASK 0x1F
#define UMAC_CMD 0x008
#define CMD_TX_EN (1 << 0)
#define CMD_RX_EN (1 << 1)
#define UMAC_SPEED_10 0
#define UMAC_SPEED_100 1
#define UMAC_SPEED_1000 2
#define UMAC_SPEED_2500 3
#define CMD_SPEED_SHIFT 2
#define CMD_SPEED_MASK 3
#define CMD_PROMISC (1 << 4)
#define CMD_PAD_EN (1 << 5)
#define CMD_CRC_FWD (1 << 6)
#define CMD_PAUSE_FWD (1 << 7)
#define CMD_RX_PAUSE_IGNORE (1 << 8)
#define CMD_TX_ADDR_INS (1 << 9)
#define CMD_HD_EN (1 << 10)
#define CMD_SW_RESET (1 << 13)
#define CMD_LCL_LOOP_EN (1 << 15)
#define CMD_AUTO_CONFIG (1 << 22)
#define CMD_CNTL_FRM_EN (1 << 23)
#define CMD_NO_LEN_CHK (1 << 24)
#define CMD_RMT_LOOP_EN (1 << 25)
#define CMD_PRBL_EN (1 << 27)
#define CMD_TX_PAUSE_IGNORE (1 << 28)
#define CMD_TX_RX_EN (1 << 29)
#define CMD_RUNT_FILTER_DIS (1 << 30)
#define UMAC_MAC0 0x00C
#define UMAC_MAC1 0x010
#define UMAC_MAX_FRAME_LEN 0x014
#define UMAC_TX_FLUSH 0x334
#define UMAC_MIB_START 0x400
#define UMAC_MDIO_CMD 0x614
#define MDIO_START_BUSY (1 << 29)
#define MDIO_READ_FAIL (1 << 28)
#define MDIO_RD (2 << 26)
#define MDIO_WR (1 << 26)
#define MDIO_PMD_SHIFT 21
#define MDIO_PMD_MASK 0x1F
#define MDIO_REG_SHIFT 16
#define MDIO_REG_MASK 0x1F
#define UMAC_RBUF_OVFL_CNT 0x61C
#define UMAC_MPD_CTRL 0x620
#define MPD_EN (1 << 0)
#define MPD_PW_EN (1 << 27)
#define MPD_MSEQ_LEN_SHIFT 16
#define MPD_MSEQ_LEN_MASK 0xFF
#define UMAC_MPD_PW_MS 0x624
#define UMAC_MPD_PW_LS 0x628
#define UMAC_RBUF_ERR_CNT 0x634
#define UMAC_MDF_ERR_CNT 0x638
#define UMAC_MDF_CTRL 0x650
#define UMAC_MDF_ADDR 0x654
#define UMAC_MIB_CTRL 0x580
#define MIB_RESET_RX (1 << 0)
#define MIB_RESET_RUNT (1 << 1)
#define MIB_RESET_TX (1 << 2)
#define RBUF_CTRL 0x00
#define RBUF_64B_EN (1 << 0)
#define RBUF_ALIGN_2B (1 << 1)
#define RBUF_BAD_DIS (1 << 2)
#define RBUF_STATUS 0x0C
#define RBUF_STATUS_WOL (1 << 0)
#define RBUF_STATUS_MPD_INTR_ACTIVE (1 << 1)
#define RBUF_STATUS_ACPI_INTR_ACTIVE (1 << 2)
#define RBUF_CHK_CTRL 0x14
#define RBUF_RXCHK_EN (1 << 0)
#define RBUF_SKIP_FCS (1 << 4)
#define RBUF_TBUF_SIZE_CTRL 0xb4
#define RBUF_HFB_CTRL_V1 0x38
#define RBUF_HFB_FILTER_EN_SHIFT 16
#define RBUF_HFB_FILTER_EN_MASK 0xffff0000
#define RBUF_HFB_EN (1 << 0)
#define RBUF_HFB_256B (1 << 1)
#define RBUF_ACPI_EN (1 << 2)
#define RBUF_HFB_LEN_V1 0x3C
#define RBUF_FLTR_LEN_MASK 0xFF
#define RBUF_FLTR_LEN_SHIFT 8
#define TBUF_CTRL 0x00
#define TBUF_BP_MC 0x0C
#define TBUF_CTRL_V1 0x80
#define TBUF_BP_MC_V1 0xA0
#define HFB_CTRL 0x00
#define HFB_FLT_ENABLE_V3PLUS 0x04
#define HFB_FLT_LEN_V2 0x04
#define HFB_FLT_LEN_V3PLUS 0x1C
/* uniMac intrl2 registers */
#define INTRL2_CPU_STAT 0x00
#define INTRL2_CPU_SET 0x04
#define INTRL2_CPU_CLEAR 0x08
#define INTRL2_CPU_MASK_STATUS 0x0C
#define INTRL2_CPU_MASK_SET 0x10
#define INTRL2_CPU_MASK_CLEAR 0x14
/* INTRL2 instance 0 definitions */
#define UMAC_IRQ_SCB (1 << 0)
#define UMAC_IRQ_EPHY (1 << 1)
#define UMAC_IRQ_PHY_DET_R (1 << 2)
#define UMAC_IRQ_PHY_DET_F (1 << 3)
#define UMAC_IRQ_LINK_UP (1 << 4)
#define UMAC_IRQ_LINK_DOWN (1 << 5)
#define UMAC_IRQ_UMAC (1 << 6)
#define UMAC_IRQ_UMAC_TSV (1 << 7)
#define UMAC_IRQ_TBUF_UNDERRUN (1 << 8)
#define UMAC_IRQ_RBUF_OVERFLOW (1 << 9)
#define UMAC_IRQ_HFB_SM (1 << 10)
#define UMAC_IRQ_HFB_MM (1 << 11)
#define UMAC_IRQ_MPD_R (1 << 12)
#define UMAC_IRQ_RXDMA_MBDONE (1 << 13)
#define UMAC_IRQ_RXDMA_PDONE (1 << 14)
#define UMAC_IRQ_RXDMA_BDONE (1 << 15)
#define UMAC_IRQ_TXDMA_MBDONE (1 << 16)
#define UMAC_IRQ_TXDMA_PDONE (1 << 17)
#define UMAC_IRQ_TXDMA_BDONE (1 << 18)
/* Only valid for GENETv3+ */
#define UMAC_IRQ_MDIO_DONE (1 << 23)
#define UMAC_IRQ_MDIO_ERROR (1 << 24)
/* Register block offsets */
#define GENET_SYS_OFF 0x0000
#define GENET_GR_BRIDGE_OFF 0x0040
#define GENET_EXT_OFF 0x0080
#define GENET_INTRL2_0_OFF 0x0200
#define GENET_INTRL2_1_OFF 0x0240
#define GENET_RBUF_OFF 0x0300
#define GENET_UMAC_OFF 0x0800
/* SYS block offsets and register definitions */
#define SYS_REV_CTRL 0x00
#define SYS_PORT_CTRL 0x04
#define PORT_MODE_INT_EPHY 0
#define PORT_MODE_INT_GPHY 1
#define PORT_MODE_EXT_EPHY 2
#define PORT_MODE_EXT_GPHY 3
#define PORT_MODE_EXT_RVMII_25 (4 | BIT(4))
#define PORT_MODE_EXT_RVMII_50 4
#define LED_ACT_SOURCE_MAC (1 << 9)
#define SYS_RBUF_FLUSH_CTRL 0x08
#define SYS_TBUF_FLUSH_CTRL 0x0C
#define RBUF_FLUSH_CTRL_V1 0x04
/* Ext block register offsets and definitions */
#define EXT_EXT_PWR_MGMT 0x00
#define EXT_PWR_DOWN_BIAS (1 << 0)
#define EXT_PWR_DOWN_DLL (1 << 1)
#define EXT_PWR_DOWN_PHY (1 << 2)
#define EXT_PWR_DN_EN_LD (1 << 3)
#define EXT_ENERGY_DET (1 << 4)
#define EXT_IDDQ_FROM_PHY (1 << 5)
#define EXT_PHY_RESET (1 << 8)
#define EXT_ENERGY_DET_MASK (1 << 12)
#define EXT_RGMII_OOB_CTRL 0x0C
#define RGMII_MODE_EN (1 << 0)
#define RGMII_LINK (1 << 4)
#define OOB_DISABLE (1 << 5)
#define ID_MODE_DIS (1 << 16)
#define EXT_GPHY_CTRL 0x1C
#define EXT_CFG_IDDQ_BIAS (1 << 0)
#define EXT_CFG_PWR_DOWN (1 << 1)
#define EXT_GPHY_RESET (1 << 5)
/* DMA rings size */
#define DMA_RING_SIZE (0x40)
#define DMA_RINGS_SIZE (DMA_RING_SIZE * (DESC_INDEX + 1))
/* DMA registers common definitions */
#define DMA_RW_POINTER_MASK 0x1FF
#define DMA_P_INDEX_DISCARD_CNT_MASK 0xFFFF
#define DMA_P_INDEX_DISCARD_CNT_SHIFT 16
#define DMA_BUFFER_DONE_CNT_MASK 0xFFFF
#define DMA_BUFFER_DONE_CNT_SHIFT 16
#define DMA_P_INDEX_MASK 0xFFFF
#define DMA_C_INDEX_MASK 0xFFFF
/* DMA ring size register */
#define DMA_RING_SIZE_MASK 0xFFFF
#define DMA_RING_SIZE_SHIFT 16
#define DMA_RING_BUFFER_SIZE_MASK 0xFFFF
/* DMA interrupt threshold register */
#define DMA_INTR_THRESHOLD_MASK 0x00FF
/* DMA XON/XOFF register */
#define DMA_XON_THRESHOLD_MASK 0xFFFF
#define DMA_XOFF_THRESHOLD_MASK 0xFFFF
#define DMA_XOFF_THRESHOLD_SHIFT 16
/* DMA flow period register */
#define DMA_FLOW_PERIOD_MASK 0xFFFF
#define DMA_MAX_PKT_SIZE_MASK 0xFFFF
#define DMA_MAX_PKT_SIZE_SHIFT 16
/* DMA control register */
#define DMA_EN (1 << 0)
#define DMA_RING_BUF_EN_SHIFT 0x01
#define DMA_RING_BUF_EN_MASK 0xFFFF
#define DMA_TSB_SWAP_EN (1 << 20)
/* DMA status register */
#define DMA_DISABLED (1 << 0)
#define DMA_DESC_RAM_INIT_BUSY (1 << 1)
/* DMA SCB burst size register */
#define DMA_SCB_BURST_SIZE_MASK 0x1F
/* DMA activity vector register */
#define DMA_ACTIVITY_VECTOR_MASK 0x1FFFF
/* DMA backpressure mask register */
#define DMA_BACKPRESSURE_MASK 0x1FFFF
#define DMA_PFC_ENABLE (1 << 31)
/* DMA backpressure status register */
#define DMA_BACKPRESSURE_STATUS_MASK 0x1FFFF
/* DMA override register */
#define DMA_LITTLE_ENDIAN_MODE (1 << 0)
#define DMA_REGISTER_MODE (1 << 1)
/* DMA timeout register */
#define DMA_TIMEOUT_MASK 0xFFFF
#define DMA_TIMEOUT_VAL 5000 /* microseconds */
/* TDMA rate limiting control register */
#define DMA_RATE_LIMIT_EN_MASK 0xFFFF
/* TDMA arbitration control register */
#define DMA_ARBITER_MODE_MASK 0x03
#define DMA_RING_BUF_PRIORITY_MASK 0x1F
#define DMA_RING_BUF_PRIORITY_SHIFT 5
#define DMA_RATE_ADJ_MASK 0xFF
/* Tx/Rx DMA descriptor common bits */
#define DMA_BUFLENGTH_MASK 0x0fff
#define DMA_BUFLENGTH_SHIFT 16
#define DMA_OWN 0x8000
#define DMA_EOP 0x4000
#define DMA_SOP 0x2000
#define DMA_WRAP 0x1000
/* Tx specific Dma descriptor bits */
#define DMA_TX_UNDERRUN 0x0200
#define DMA_TX_APPEND_CRC 0x0040
#define DMA_TX_OW_CRC 0x0020
#define DMA_TX_DO_CSUM 0x0010
#define DMA_TX_QTAG_SHIFT 7
/* Rx Specific Dma descriptor bits */
#define DMA_RX_CHK_V3PLUS 0x8000
#define DMA_RX_CHK_V12 0x1000
#define DMA_RX_BRDCAST 0x0040
#define DMA_RX_MULT 0x0020
#define DMA_RX_LG 0x0010
#define DMA_RX_NO 0x0008
#define DMA_RX_RXER 0x0004
#define DMA_RX_CRC_ERROR 0x0002
#define DMA_RX_OV 0x0001
#define DMA_RX_FI_MASK 0x001F
#define DMA_RX_FI_SHIFT 0x0007
#define DMA_DESC_ALLOC_MASK 0x00FF
#define DMA_ARBITER_RR 0x00
#define DMA_ARBITER_WRR 0x01
#define DMA_ARBITER_SP 0x02
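/* Software control block tracking the skb and DMA mapping associated
 * with one hardware buffer descriptor.
 */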
struct enet_cb {
struct sk_buff *skb;
void __iomem *bd_addr;
DEFINE_DMA_UNMAP_ADDR(dma_addr);
DEFINE_DMA_UNMAP_LEN(dma_len);
};
/* power management mode */
enum bcmgenet_power_mode {
GENET_POWER_CABLE_SENSE = 0,
GENET_POWER_PASSIVE,
};
struct bcmgenet_priv;
/* We support both runtime GENET version detection and compile-time
 * selection, to optimize code paths for a given hardware.
 */
enum bcmgenet_version {
GENET_V1 = 1,
GENET_V2,
GENET_V3,
GENET_V4
};
#define GENET_IS_V1(p) ((p)->version == GENET_V1)
#define GENET_IS_V2(p) ((p)->version == GENET_V2)
#define GENET_IS_V3(p) ((p)->version == GENET_V3)
#define GENET_IS_V4(p) ((p)->version == GENET_V4)
/* Hardware flags */
#define GENET_HAS_40BITS (1 << 0)
#define GENET_HAS_EXT (1 << 1)
#define GENET_HAS_MDIO_INTR (1 << 2)
/* BCMGENET hardware parameters, keep this structure nicely aligned
* since it is going to be used in hot paths
*/
struct bcmgenet_hw_params {
u8 tx_queues;
u8 rx_queues;
u8 bds_cnt;
u8 bp_in_en_shift;
u32 bp_in_mask;
u8 hfb_filter_cnt;
u8 qtag_mask;
u16 tbuf_offset;
u32 hfb_offset;
u32 hfb_reg_offset;
u32 rdma_offset;
u32 tdma_offset;
u32 words_per_bd;
u32 flags;
};
struct bcmgenet_tx_ring {
spinlock_t lock; /* ring lock */
unsigned int index; /* ring index */
unsigned int queue; /* queue index */
struct enet_cb *cbs; /* tx ring buffer control block */
unsigned int size; /* size of each tx ring */
unsigned int c_index; /* last consumer index of each ring */
unsigned int free_bds; /* # of free bds for each ring */
unsigned int write_ptr; /* Tx ring write pointer SW copy */
unsigned int prod_index; /* Tx ring producer index SW copy */
unsigned int cb_ptr; /* Tx ring initial CB ptr */
unsigned int end_ptr; /* Tx ring end CB ptr */
void (*int_enable)(struct bcmgenet_priv *priv,
struct bcmgenet_tx_ring *);
void (*int_disable)(struct bcmgenet_priv *priv,
struct bcmgenet_tx_ring *);
};
/* device context */
struct bcmgenet_priv {
void __iomem *base;
enum bcmgenet_version version;
struct net_device *dev;
spinlock_t lock;
spinlock_t bh_lock;
u32 int0_mask;
u32 int1_mask;
/* NAPI for descriptor based rx */
struct napi_struct napi ____cacheline_aligned;
/* transmit variables */
void __iomem *tx_bds;
struct enet_cb *tx_cbs;
unsigned int num_tx_bds;
struct bcmgenet_tx_ring tx_rings[DESC_INDEX + 1];
/* receive variables */
void __iomem *rx_bds;
void __iomem *rx_bd_assign_ptr;
int rx_bd_assign_index;
struct enet_cb *rx_cbs;
unsigned int num_rx_bds;
unsigned int rx_buf_len;
unsigned int rx_read_ptr;
unsigned int rx_c_index;
/* other misc variables */
struct bcmgenet_hw_params *hw_params;
/* MDIO bus variables */
wait_queue_head_t wq;
struct phy_device *phydev;
struct device_node *phy_dn;
struct mii_bus *mii_bus;
/* PHY device variables */
int old_duplex;
int old_link;
int old_pause;
phy_interface_t phy_interface;
int phy_addr;
int ext_phy;
/* Interrupt variables */
struct work_struct bcmgenet_irq_work;
int irq0;
int irq1;
unsigned int irq0_stat;
unsigned int irq1_stat;
/* HW descriptors/checksum variables */
bool desc_64b_en;
bool desc_rxchk_en;
bool crc_fwd_en;
unsigned int dma_rx_chk_bit;
u32 msg_enable;
struct clk *clk;
struct platform_device *pdev;
/* WOL */
unsigned long wol_enabled;
struct clk *clk_wol;
u32 wolopts;
struct bcmgenet_mib_counters mib;
};
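/* Generate readl/writel accessors for a register block at a fixed
 * offset from the mapped device base, e.g. bcmgenet_umac_readl(priv,
 * UMAC_CMD) reads the UMAC_CMD register of the UMAC block.
 */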
#define GENET_IO_MACRO(name, offset) \
static inline u32 bcmgenet_##name##_readl(struct bcmgenet_priv *priv, \
u32 off) \
{ \
return __raw_readl(priv->base + offset + off); \
} \
static inline void bcmgenet_##name##_writel(struct bcmgenet_priv *priv, \
u32 val, u32 off) \
{ \
__raw_writel(val, priv->base + offset + off); \
}
GENET_IO_MACRO(ext, GENET_EXT_OFF);
GENET_IO_MACRO(umac, GENET_UMAC_OFF);
GENET_IO_MACRO(sys, GENET_SYS_OFF);
/* interrupt l2 registers accessors */
GENET_IO_MACRO(intrl2_0, GENET_INTRL2_0_OFF);
GENET_IO_MACRO(intrl2_1, GENET_INTRL2_1_OFF);
/* HFB register accessors */
GENET_IO_MACRO(hfb, priv->hw_params->hfb_offset);
/* GENET v2+ HFB control and filter len helpers */
GENET_IO_MACRO(hfb_reg, priv->hw_params->hfb_reg_offset);
/* RBUF register accessors */
GENET_IO_MACRO(rbuf, GENET_RBUF_OFF);
/* MDIO routines */
int bcmgenet_mii_init(struct net_device *dev);
int bcmgenet_mii_config(struct net_device *dev);
void bcmgenet_mii_exit(struct net_device *dev);
void bcmgenet_mii_reset(struct net_device *dev);
#endif /* __BCMGENET_H__ */
/*
* Broadcom GENET MDIO routines
*
* Copyright (c) 2014 Broadcom Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
#include <linux/types.h>
#include <linux/delay.h>
#include <linux/wait.h>
#include <linux/mii.h>
#include <linux/ethtool.h>
#include <linux/bitops.h>
#include <linux/netdevice.h>
#include <linux/platform_device.h>
#include <linux/phy.h>
#include <linux/phy_fixed.h>
#include <linux/brcmphy.h>
#include <linux/of.h>
#include <linux/of_net.h>
#include <linux/of_mdio.h>
#include "bcmgenet.h"
/* read a value from the MII */
static int bcmgenet_mii_read(struct mii_bus *bus, int phy_id, int location)
{
int ret;
struct net_device *dev = bus->priv;
struct bcmgenet_priv *priv = netdev_priv(dev);
u32 reg;
bcmgenet_umac_writel(priv, (MDIO_RD | (phy_id << MDIO_PMD_SHIFT) |
(location << MDIO_REG_SHIFT)), UMAC_MDIO_CMD);
/* Start MDIO transaction */
reg = bcmgenet_umac_readl(priv, UMAC_MDIO_CMD);
reg |= MDIO_START_BUSY;
bcmgenet_umac_writel(priv, reg, UMAC_MDIO_CMD);
wait_event_timeout(priv->wq,
!(bcmgenet_umac_readl(priv, UMAC_MDIO_CMD)
& MDIO_START_BUSY),
HZ / 100);
ret = bcmgenet_umac_readl(priv, UMAC_MDIO_CMD);
if (ret & MDIO_READ_FAIL)
return -EIO;
return ret & 0xffff;
}
/* write a value to the MII */
static int bcmgenet_mii_write(struct mii_bus *bus, int phy_id,
int location, u16 val)
{
struct net_device *dev = bus->priv;
struct bcmgenet_priv *priv = netdev_priv(dev);
u32 reg;
bcmgenet_umac_writel(priv, (MDIO_WR | (phy_id << MDIO_PMD_SHIFT) |
(location << MDIO_REG_SHIFT) | (0xffff & val)),
UMAC_MDIO_CMD);
reg = bcmgenet_umac_readl(priv, UMAC_MDIO_CMD);
reg |= MDIO_START_BUSY;
bcmgenet_umac_writel(priv, reg, UMAC_MDIO_CMD);
wait_event_timeout(priv->wq,
!(bcmgenet_umac_readl(priv, UMAC_MDIO_CMD) &
MDIO_START_BUSY),
HZ / 100);
return 0;
}
/* Set up the netdev link state when the PHY link status changes and
 * update the UMAC and RGMII blocks when the link is up.
 */
static void bcmgenet_mii_setup(struct net_device *dev)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
struct phy_device *phydev = priv->phydev;
u32 reg, cmd_bits = 0;
unsigned int status_changed = 0;
if (priv->old_link != phydev->link) {
status_changed = 1;
priv->old_link = phydev->link;
}
if (phydev->link) {
/* Program the UMAC and RGMII blocks based on the established
 * link speed, duplex, and pause. The speed set in umac->cmd
 * tells the RGMII block which clock to use for transmit:
 * 25MHz (100Mbps) or 125MHz (1Gbps). The receive clock is
 * provided by the PHY.
 */
reg = bcmgenet_ext_readl(priv, EXT_RGMII_OOB_CTRL);
reg &= ~OOB_DISABLE;
reg |= RGMII_LINK;
bcmgenet_ext_writel(priv, reg, EXT_RGMII_OOB_CTRL);
/* speed */
if (phydev->speed == SPEED_1000)
cmd_bits = UMAC_SPEED_1000;
else if (phydev->speed == SPEED_100)
cmd_bits = UMAC_SPEED_100;
else
cmd_bits = UMAC_SPEED_10;
cmd_bits <<= CMD_SPEED_SHIFT;
if (priv->old_duplex != phydev->duplex) {
status_changed = 1;
priv->old_duplex = phydev->duplex;
}
/* duplex */
if (phydev->duplex != DUPLEX_FULL)
cmd_bits |= CMD_HD_EN;
if (priv->old_pause != phydev->pause) {
status_changed = 1;
priv->old_pause = phydev->pause;
}
/* pause capability */
if (!phydev->pause)
cmd_bits |= CMD_RX_PAUSE_IGNORE | CMD_TX_PAUSE_IGNORE;
reg = bcmgenet_umac_readl(priv, UMAC_CMD);
reg &= ~((CMD_SPEED_MASK << CMD_SPEED_SHIFT) |
CMD_HD_EN |
CMD_RX_PAUSE_IGNORE | CMD_TX_PAUSE_IGNORE);
reg |= cmd_bits;
bcmgenet_umac_writel(priv, reg, UMAC_CMD);
}
if (status_changed)
phy_print_status(phydev);
}
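/* Re-initialize the attached PHY, if any, and restart
 * auto-negotiation.
 */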
void bcmgenet_mii_reset(struct net_device *dev)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
if (priv->phydev) {
phy_init_hw(priv->phydev);
phy_start_aneg(priv->phydev);
}
}
static void bcmgenet_ephy_power_up(struct net_device *dev)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
u32 reg = 0;
/* EXT_GPHY_CTRL is only valid for GENETv4 and onward */
if (!GENET_IS_V4(priv))
return;
reg = bcmgenet_ext_readl(priv, EXT_GPHY_CTRL);
reg &= ~(EXT_CFG_IDDQ_BIAS | EXT_CFG_PWR_DOWN);
reg |= EXT_GPHY_RESET;
bcmgenet_ext_writel(priv, reg, EXT_GPHY_CTRL);
mdelay(2);
reg &= ~EXT_GPHY_RESET;
bcmgenet_ext_writel(priv, reg, EXT_GPHY_CTRL);
udelay(20);
}
static void bcmgenet_internal_phy_setup(struct net_device *dev)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
u32 reg;
/* Power up EPHY */
bcmgenet_ephy_power_up(dev);
/* enable APD */
reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
reg |= EXT_PWR_DN_EN_LD;
bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
bcmgenet_mii_reset(dev);
}
static void bcmgenet_moca_phy_setup(struct bcmgenet_priv *priv)
{
u32 reg;
/* Speed settings are set in bcmgenet_mii_setup() */
reg = bcmgenet_sys_readl(priv, SYS_PORT_CTRL);
reg |= LED_ACT_SOURCE_MAC;
bcmgenet_sys_writel(priv, reg, SYS_PORT_CTRL);
}
int bcmgenet_mii_config(struct net_device *dev)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
struct phy_device *phydev = priv->phydev;
struct device *kdev = &priv->pdev->dev;
const char *phy_name = NULL;
u32 id_mode_dis = 0;
u32 port_ctrl;
u32 reg;
priv->ext_phy = !phy_is_internal(priv->phydev) &&
(priv->phy_interface != PHY_INTERFACE_MODE_MOCA);
if (phy_is_internal(priv->phydev))
priv->phy_interface = PHY_INTERFACE_MODE_NA;
switch (priv->phy_interface) {
case PHY_INTERFACE_MODE_NA:
case PHY_INTERFACE_MODE_MOCA:
/* Irrespective of the actually configured PHY speed (100 or
 * 1000), GENETv4 only has an internal GPHY, so we will just end
 * up masking the Gigabit features from what we support, not
 * switching to the EPHY.
 */
if (GENET_IS_V4(priv))
port_ctrl = PORT_MODE_INT_GPHY;
else
port_ctrl = PORT_MODE_INT_EPHY;
bcmgenet_sys_writel(priv, port_ctrl, SYS_PORT_CTRL);
if (phy_is_internal(priv->phydev)) {
phy_name = "internal PHY";
bcmgenet_internal_phy_setup(dev);
} else if (priv->phy_interface == PHY_INTERFACE_MODE_MOCA) {
phy_name = "MoCA";
bcmgenet_moca_phy_setup(priv);
}
break;
case PHY_INTERFACE_MODE_MII:
phy_name = "external MII";
phydev->supported &= PHY_BASIC_FEATURES;
bcmgenet_sys_writel(priv,
PORT_MODE_EXT_EPHY, SYS_PORT_CTRL);
break;
case PHY_INTERFACE_MODE_REVMII:
phy_name = "external RvMII";
/* of_mdiobus_register() took care of reading the 'max-speed'
 * PHY property for us, effectively limiting the PHY's supported
 * capabilities; use that knowledge to also configure the
 * Reverse MII interface correctly.
 */
if ((priv->phydev->supported & PHY_BASIC_FEATURES) ==
PHY_BASIC_FEATURES)
port_ctrl = PORT_MODE_EXT_RVMII_25;
else
port_ctrl = PORT_MODE_EXT_RVMII_50;
bcmgenet_sys_writel(priv, port_ctrl, SYS_PORT_CTRL);
break;
case PHY_INTERFACE_MODE_RGMII:
/* RGMII_NO_ID: TXC transitions at the same time as TXD
* (requires PCB or receiver-side delay)
* RGMII: Add 2ns delay on TXC (90 degree shift)
*
* ID is implicitly disabled for 100Mbps (RG)MII operation.
*/
id_mode_dis = BIT(16);
/* fall through */
case PHY_INTERFACE_MODE_RGMII_TXID:
if (id_mode_dis)
phy_name = "external RGMII (no delay)";
else
phy_name = "external RGMII (TX delay)";
reg = bcmgenet_ext_readl(priv, EXT_RGMII_OOB_CTRL);
reg |= RGMII_MODE_EN | id_mode_dis;
bcmgenet_ext_writel(priv, reg, EXT_RGMII_OOB_CTRL);
bcmgenet_sys_writel(priv,
PORT_MODE_EXT_GPHY, SYS_PORT_CTRL);
break;
default:
dev_err(kdev, "unknown phy mode: %d\n", priv->phy_interface);
return -EINVAL;
}
dev_info(kdev, "configuring instance for %s\n", phy_name);
return 0;
}
static int bcmgenet_mii_probe(struct net_device *dev)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
struct phy_device *phydev;
unsigned int phy_flags;
int ret;
if (priv->phydev) {
pr_info("PHY already attached\n");
return 0;
}
if (priv->phy_dn)
phydev = of_phy_connect(dev, priv->phy_dn,
bcmgenet_mii_setup, 0,
priv->phy_interface);
else
phydev = of_phy_connect_fixed_link(dev,
bcmgenet_mii_setup,
priv->phy_interface);
if (!phydev) {
pr_err("could not attach to PHY\n");
return -ENODEV;
}
priv->old_link = -1;
priv->old_duplex = -1;
priv->old_pause = -1;
priv->phydev = phydev;
/* Configure the port multiplexer based on the probed PHY device,
 * since reading the 'max-speed' property determines the maximum
 * supported PHY speed, which bcmgenet_mii_config() needs to
 * configure things appropriately.
 */
ret = bcmgenet_mii_config(dev);
if (ret) {
phy_disconnect(priv->phydev);
return ret;
}
phy_flags = PHY_BRCM_100MBPS_WAR;
/* Workarounds are only needed for 100Mbps PHYs, and
 * never on GENET V1 hardware.
 */
if ((phydev->supported & PHY_GBIT_FEATURES) || GENET_IS_V1(priv))
phy_flags = 0;
phydev->dev_flags |= phy_flags;
phydev->advertising = phydev->supported;
/* The internal PHY has its link interrupts routed to the
* Ethernet MAC ISRs
*/
if (phy_is_internal(priv->phydev))
priv->mii_bus->irq[phydev->addr] = PHY_IGNORE_INTERRUPT;
else
priv->mii_bus->irq[phydev->addr] = PHY_POLL;
pr_info("attached PHY at address %d [%s]\n",
phydev->addr, phydev->drv->name);
return 0;
}
static int bcmgenet_mii_alloc(struct bcmgenet_priv *priv)
{
struct mii_bus *bus;
if (priv->mii_bus)
return 0;
priv->mii_bus = mdiobus_alloc();
if (!priv->mii_bus) {
pr_err("failed to allocate\n");
return -ENOMEM;
}
bus = priv->mii_bus;
bus->priv = priv->dev;
bus->name = "bcmgenet MII bus";
bus->parent = &priv->pdev->dev;
bus->read = bcmgenet_mii_read;
bus->write = bcmgenet_mii_write;
snprintf(bus->id, MII_BUS_ID_SIZE, "%s-%d",
priv->pdev->name, priv->pdev->id);
bus->irq = kcalloc(PHY_MAX_ADDR, sizeof(int), GFP_KERNEL);
if (!bus->irq) {
mdiobus_free(priv->mii_bus);
return -ENOMEM;
}
return 0;
}
static int bcmgenet_mii_of_init(struct bcmgenet_priv *priv)
{
struct device_node *dn = priv->pdev->dev.of_node;
struct device *kdev = &priv->pdev->dev;
struct device_node *mdio_dn;
char *compat;
int ret;
compat = kasprintf(GFP_KERNEL, "brcm,genet-mdio-v%d", priv->version);
if (!compat)
return -ENOMEM;
mdio_dn = of_find_compatible_node(dn, NULL, compat);
kfree(compat);
if (!mdio_dn) {
dev_err(kdev, "unable to find MDIO bus node\n");
return -ENODEV;
}
ret = of_mdiobus_register(priv->mii_bus, mdio_dn);
if (ret) {
dev_err(kdev, "failed to register MDIO bus\n");
return ret;
}
/* Fetch the PHY phandle */
priv->phy_dn = of_parse_phandle(dn, "phy-handle", 0);
/* Get the link mode */
priv->phy_interface = of_get_phy_mode(dn);
return 0;
}
int bcmgenet_mii_init(struct net_device *dev)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
int ret;
ret = bcmgenet_mii_alloc(priv);
if (ret)
return ret;
ret = bcmgenet_mii_of_init(priv);
if (ret)
goto out_free;
ret = bcmgenet_mii_probe(dev);
if (ret)
goto out;
return 0;
out:
mdiobus_unregister(priv->mii_bus);
out_free:
kfree(priv->mii_bus->irq);
mdiobus_free(priv->mii_bus);
return ret;
}
void bcmgenet_mii_exit(struct net_device *dev)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
mdiobus_unregister(priv->mii_bus);
kfree(priv->mii_bus->irq);
mdiobus_free(priv->mii_bus);
}
@@ -71,6 +71,12 @@ config BCM63XX_PHY
---help---
Currently supports the 6348 and 6358 PHYs.
config BCM7XXX_PHY
tristate "Drivers for Broadcom 7xxx SOCs internal PHYs"
---help---
Currently supports the BCM7366, BCM7439, BCM7445, and
40nm and 65nm generation of BCM7xxx Set Top Box SoCs.
config BCM87XX_PHY
tristate "Driver for Broadcom BCM8706 and BCM8727 PHYs"
help
...
@@ -12,6 +12,7 @@ obj-$(CONFIG_SMSC_PHY) += smsc.o
obj-$(CONFIG_VITESSE_PHY) += vitesse.o
obj-$(CONFIG_BROADCOM_PHY) += broadcom.o
obj-$(CONFIG_BCM63XX_PHY) += bcm63xx.o
obj-$(CONFIG_BCM7XXX_PHY) += bcm7xxx.o
obj-$(CONFIG_BCM87XX_PHY) += bcm87xx.o
obj-$(CONFIG_ICPLUS_PHY) += icplus.o
obj-$(CONFIG_REALTEK_PHY) += realtek.o
...
/*
* Broadcom BCM7xxx internal transceivers support.
*
* Copyright (C) 2014, Broadcom Corporation
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/module.h>
#include <linux/phy.h>
#include <linux/delay.h>
#include <linux/bitops.h>
#include <linux/brcmphy.h>
/* Broadcom BCM7xxx internal PHY registers */
#define MII_BCM7XXX_CHANNEL_WIDTH 0x2000
/* 40nm only register definitions */
#define MII_BCM7XXX_100TX_AUX_CTL 0x10
#define MII_BCM7XXX_100TX_FALSE_CAR 0x13
#define MII_BCM7XXX_100TX_DISC 0x14
#define MII_BCM7XXX_AUX_MODE 0x1d
#define MII_BCM7XX_64CLK_MDIO BIT(12)
#define MII_BCM7XXX_CORE_BASE1E 0x1e
#define MII_BCM7XXX_TEST 0x1f
#define MII_BCM7XXX_SHD_MODE_2 BIT(2)
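/* BCM7445-specific analog front-end tuning through the expansion
 * registers; the register/value pairs below are presumably
 * vendor-recommended settings and are written verbatim.
 */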
static int bcm7445_config_init(struct phy_device *phydev)
{
int ret;
const struct bcm7445_regs {
int reg;
u16 value;
} bcm7445_regs_cfg[] = {
/* increases ADC latency by 24ns */
{ MII_BCM54XX_EXP_SEL, 0x0038 },
{ MII_BCM54XX_EXP_DATA, 0xAB95 },
/* increases internal 1V LDO voltage by 5% */
{ MII_BCM54XX_EXP_SEL, 0x2038 },
{ MII_BCM54XX_EXP_DATA, 0xBB22 },
/* reduce RX low pass filter corner frequency */
{ MII_BCM54XX_EXP_SEL, 0x6038 },
{ MII_BCM54XX_EXP_DATA, 0xFFC5 },
/* reduce RX high pass filter corner frequency */
{ MII_BCM54XX_EXP_SEL, 0x003a },
{ MII_BCM54XX_EXP_DATA, 0x2002 },
};
unsigned int i;
for (i = 0; i < ARRAY_SIZE(bcm7445_regs_cfg); i++) {
ret = phy_write(phydev,
bcm7445_regs_cfg[i].reg,
bcm7445_regs_cfg[i].value);
if (ret)
return ret;
}
return 0;
}
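/* Indirect write through the expansion register pair: select the
 * target register via EXP_SEL (with the ER select code), then write
 * the value via EXP_DATA.
 */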
static void phy_write_exp(struct phy_device *phydev,
u16 reg, u16 value)
{
phy_write(phydev, MII_BCM54XX_EXP_SEL, MII_BCM54XX_EXP_SEL_ER | reg);
phy_write(phydev, MII_BCM54XX_EXP_DATA, value);
}
static void phy_write_misc(struct phy_device *phydev,
u16 reg, u16 chl, u16 value)
{
int tmp;
phy_write(phydev, MII_BCM54XX_AUX_CTL, MII_BCM54XX_AUXCTL_SHDWSEL_MISC);
tmp = phy_read(phydev, MII_BCM54XX_AUX_CTL);
tmp |= MII_BCM54XX_AUXCTL_ACTL_SMDSP_ENA;
phy_write(phydev, MII_BCM54XX_AUX_CTL, tmp);
tmp = (chl * MII_BCM7XXX_CHANNEL_WIDTH) | reg;
phy_write(phydev, MII_BCM54XX_EXP_SEL, tmp);
phy_write(phydev, MII_BCM54XX_EXP_DATA, value);
}
static int bcm7xxx_28nm_afe_config_init(struct phy_device *phydev)
{
/* write AFE_RXCONFIG_0 */
phy_write_misc(phydev, 0x38, 0x0000, 0xeb19);
/* write AFE_RXCONFIG_1 */
phy_write_misc(phydev, 0x38, 0x0001, 0x9a3f);
/* write AFE_RX_LP_COUNTER */
phy_write_misc(phydev, 0x38, 0x0003, 0x7fc7);
/* write AFE_HPF_TRIM_OTHERS */
phy_write_misc(phydev, 0x3A, 0x0000, 0x000b);
/* write AFE_TX_CONFIG */
phy_write_misc(phydev, 0x39, 0x0000, 0x0800);
/* Increase VCO range to prevent the PLL from unlocking at low
 * temperature.
 */
phy_write_misc(phydev, 0x0032, 0x0001, 0x0048);
/* Change Ki to 011 */
phy_write_misc(phydev, 0x0032, 0x0002, 0x021b);
/* Disable loading of TVCO buffer to bandgap, set bandgap trim
* to 111
*/
phy_write_misc(phydev, 0x0033, 0x0000, 0x0e20);
/* Adjust bias current trim by -3 */
phy_write_misc(phydev, 0x000a, 0x0000, 0x690b);
/* Switch to CORE_BASE1E */
phy_write(phydev, MII_BCM7XXX_CORE_BASE1E, 0xd);
/* Reset R_CAL/RC_CAL Engine */
phy_write_exp(phydev, 0x00b0, 0x0010);
/* Disable Reset R_CAL/RC_CAL Engine */
phy_write_exp(phydev, 0x00b0, 0x0000);
return 0;
}
static int bcm7xxx_28nm_config_init(struct phy_device *phydev)
{
int ret;
ret = bcm7445_config_init(phydev);
if (ret)
return ret;
return bcm7xxx_28nm_afe_config_init(phydev);
}
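/* Read-modify-write helper: clear clr_mask, set set_mask at the given
 * register location, and return the resulting value (or a negative
 * error code).
 */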
static int phy_set_clr_bits(struct phy_device *dev, int location,
int set_mask, int clr_mask)
{
int v, ret;
v = phy_read(dev, location);
if (v < 0)
return v;
v &= ~clr_mask;
v |= set_mask;
ret = phy_write(dev, location, v);
if (ret < 0)
return ret;
return v;
}
static int bcm7xxx_config_init(struct phy_device *phydev)
{
int ret;
/* Enable 64 clock MDIO */
phy_write(phydev, MII_BCM7XXX_AUX_MODE, MII_BCM7XX_64CLK_MDIO);
phy_read(phydev, MII_BCM7XXX_AUX_MODE);
/* Workaround only required for 100Mbits/sec */
if (!(phydev->dev_flags & PHY_BRCM_100MBPS_WAR))
return 0;
/* set shadow mode 2 */
ret = phy_set_clr_bits(phydev, MII_BCM7XXX_TEST,
MII_BCM7XXX_SHD_MODE_2, MII_BCM7XXX_SHD_MODE_2);
if (ret < 0)
return ret;
/* set iddq_clkbias */
phy_write(phydev, MII_BCM7XXX_100TX_DISC, 0x0F00);
udelay(10);
/* reset iddq_clkbias */
phy_write(phydev, MII_BCM7XXX_100TX_DISC, 0x0C00);
phy_write(phydev, MII_BCM7XXX_100TX_FALSE_CAR, 0x7555);
/* reset shadow mode 2 */
ret = phy_set_clr_bits(phydev, MII_BCM7XXX_TEST, MII_BCM7XXX_SHD_MODE_2, 0);
if (ret < 0)
return ret;
return 0;
}
/* Workaround for putting the PHY in IDDQ mode, required
* for all BCM7XXX PHYs
*/
static int bcm7xxx_suspend(struct phy_device *phydev)
{
int ret;
const struct bcm7xxx_regs {
int reg;
u16 value;
} bcm7xxx_suspend_cfg[] = {
{ MII_BCM7XXX_TEST, 0x008b },
{ MII_BCM7XXX_100TX_AUX_CTL, 0x01c0 },
{ MII_BCM7XXX_100TX_DISC, 0x7000 },
{ MII_BCM7XXX_TEST, 0x000f },
{ MII_BCM7XXX_100TX_AUX_CTL, 0x20d0 },
{ MII_BCM7XXX_TEST, 0x000b },
};
unsigned int i;
for (i = 0; i < ARRAY_SIZE(bcm7xxx_suspend_cfg); i++) {
ret = phy_write(phydev,
bcm7xxx_suspend_cfg[i].reg,
bcm7xxx_suspend_cfg[i].value);
if (ret)
return ret;
}
return 0;
}
static int bcm7xxx_dummy_config_init(struct phy_device *phydev)
{
return 0;
}
static struct phy_driver bcm7xxx_driver[] = {
{
.phy_id = PHY_ID_BCM7366,
.phy_id_mask = 0xfffffff0,
.name = "Broadcom BCM7366",
.features = PHY_GBIT_FEATURES |
SUPPORTED_Pause | SUPPORTED_Asym_Pause,
.flags = PHY_IS_INTERNAL,
.config_init = bcm7xxx_28nm_afe_config_init,
.config_aneg = genphy_config_aneg,
.read_status = genphy_read_status,
.suspend = bcm7xxx_suspend,
.resume = bcm7xxx_28nm_afe_config_init,
.driver = { .owner = THIS_MODULE },
}, {
.phy_id = PHY_ID_BCM7439,
.phy_id_mask = 0xfffffff0,
.name = "Broadcom BCM7439",
.features = PHY_GBIT_FEATURES |
SUPPORTED_Pause | SUPPORTED_Asym_Pause,
.flags = PHY_IS_INTERNAL,
.config_init = bcm7xxx_28nm_afe_config_init,
.config_aneg = genphy_config_aneg,
.read_status = genphy_read_status,
.suspend = bcm7xxx_suspend,
.resume = bcm7xxx_28nm_afe_config_init,
.driver = { .owner = THIS_MODULE },
}, {
.phy_id = PHY_ID_BCM7445,
.phy_id_mask = 0xfffffff0,
.name = "Broadcom BCM7445",
.features = PHY_GBIT_FEATURES |
SUPPORTED_Pause | SUPPORTED_Asym_Pause,
.flags = PHY_IS_INTERNAL,
.config_init = bcm7xxx_28nm_config_init,
.config_aneg = genphy_config_aneg,
.read_status = genphy_read_status,
.suspend = bcm7xxx_suspend,
.resume = bcm7xxx_28nm_config_init,
.driver = { .owner = THIS_MODULE },
}, {
.name = "Broadcom BCM7XXX 28nm",
.phy_id = PHY_ID_BCM7XXX_28,
.phy_id_mask = PHY_BCM_OUI_MASK,
.features = PHY_GBIT_FEATURES |
SUPPORTED_Pause | SUPPORTED_Asym_Pause,
.flags = PHY_IS_INTERNAL,
.config_init = bcm7xxx_28nm_config_init,
.config_aneg = genphy_config_aneg,
.read_status = genphy_read_status,
.suspend = bcm7xxx_suspend,
.resume = bcm7xxx_28nm_config_init,
.driver = { .owner = THIS_MODULE },
}, {
.phy_id = PHY_BCM_OUI_4,
.phy_id_mask = 0xffff0000,
.name = "Broadcom BCM7XXX 40nm",
.features = PHY_GBIT_FEATURES |
SUPPORTED_Pause | SUPPORTED_Asym_Pause,
.flags = PHY_IS_INTERNAL,
.config_init = bcm7xxx_config_init,
.config_aneg = genphy_config_aneg,
.read_status = genphy_read_status,
.suspend = bcm7xxx_suspend,
.resume = bcm7xxx_config_init,
.driver = { .owner = THIS_MODULE },
}, {
.phy_id = PHY_BCM_OUI_5,
.phy_id_mask = 0xffffff00,
.name = "Broadcom BCM7XXX 65nm",
.features = PHY_BASIC_FEATURES |
SUPPORTED_Pause | SUPPORTED_Asym_Pause,
.flags = PHY_IS_INTERNAL,
.config_init = bcm7xxx_dummy_config_init,
.config_aneg = genphy_config_aneg,
.read_status = genphy_read_status,
.suspend = bcm7xxx_suspend,
.resume = bcm7xxx_config_init,
.driver = { .owner = THIS_MODULE },
} };
static struct mdio_device_id __maybe_unused bcm7xxx_tbl[] = {
{ PHY_ID_BCM7366, 0xfffffff0, },
{ PHY_ID_BCM7439, 0xfffffff0, },
{ PHY_ID_BCM7445, 0xfffffff0, },
{ PHY_ID_BCM7XXX_28, 0xfffffc00 },
{ PHY_BCM_OUI_4, 0xffff0000 },
{ PHY_BCM_OUI_5, 0xffffff00 },
{ }
};
static int __init bcm7xxx_phy_init(void)
{
return phy_drivers_register(bcm7xxx_driver,
ARRAY_SIZE(bcm7xxx_driver));
}
static void __exit bcm7xxx_phy_exit(void)
{
phy_drivers_unregister(bcm7xxx_driver,
ARRAY_SIZE(bcm7xxx_driver));
}
module_init(bcm7xxx_phy_init);
module_exit(bcm7xxx_phy_exit);
MODULE_DEVICE_TABLE(mdio, bcm7xxx_tbl);
MODULE_DESCRIPTION("Broadcom BCM7xxx internal PHY driver");
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Broadcom Corporation");
@@ -25,58 +25,6 @@
#define BRCM_PHY_REV(phydev) \
((phydev)->drv->phy_id & ~((phydev)->drv->phy_id_mask))
#define MII_BCM54XX_ECR 0x10 /* BCM54xx extended control register */
#define MII_BCM54XX_ECR_IM 0x1000 /* Interrupt mask */
#define MII_BCM54XX_ECR_IF 0x0800 /* Interrupt force */
#define MII_BCM54XX_ESR 0x11 /* BCM54xx extended status register */
#define MII_BCM54XX_ESR_IS 0x1000 /* Interrupt status */
#define MII_BCM54XX_EXP_DATA 0x15 /* Expansion register data */
#define MII_BCM54XX_EXP_SEL 0x17 /* Expansion register select */
#define MII_BCM54XX_EXP_SEL_SSD 0x0e00 /* Secondary SerDes select */
#define MII_BCM54XX_EXP_SEL_ER 0x0f00 /* Expansion register select */
#define MII_BCM54XX_AUX_CTL 0x18 /* Auxiliary control register */
#define MII_BCM54XX_ISR 0x1a /* BCM54xx interrupt status register */
#define MII_BCM54XX_IMR 0x1b /* BCM54xx interrupt mask register */
#define MII_BCM54XX_INT_CRCERR 0x0001 /* CRC error */
#define MII_BCM54XX_INT_LINK 0x0002 /* Link status changed */
#define MII_BCM54XX_INT_SPEED 0x0004 /* Link speed change */
#define MII_BCM54XX_INT_DUPLEX 0x0008 /* Duplex mode changed */
#define MII_BCM54XX_INT_LRS 0x0010 /* Local receiver status changed */
#define MII_BCM54XX_INT_RRS 0x0020 /* Remote receiver status changed */
#define MII_BCM54XX_INT_SSERR 0x0040 /* Scrambler synchronization error */
#define MII_BCM54XX_INT_UHCD 0x0080 /* Unsupported HCD negotiated */
#define MII_BCM54XX_INT_NHCD 0x0100 /* No HCD */
#define MII_BCM54XX_INT_NHCDL 0x0200 /* No HCD link */
#define MII_BCM54XX_INT_ANPR 0x0400 /* Auto-negotiation page received */
#define MII_BCM54XX_INT_LC 0x0800 /* All counters below 128 */
#define MII_BCM54XX_INT_HC 0x1000 /* Counter above 32768 */
#define MII_BCM54XX_INT_MDIX 0x2000 /* MDIX status change */
#define MII_BCM54XX_INT_PSERR 0x4000 /* Pair swap error */
#define MII_BCM54XX_SHD 0x1c /* 0x1c shadow registers */
#define MII_BCM54XX_SHD_WRITE 0x8000
#define MII_BCM54XX_SHD_VAL(x) ((x & 0x1f) << 10)
#define MII_BCM54XX_SHD_DATA(x) ((x & 0x3ff) << 0)
/*
* AUXILIARY CONTROL SHADOW ACCESS REGISTERS. (PHY REG 0x18)
*/
#define MII_BCM54XX_AUXCTL_SHDWSEL_AUXCTL 0x0000
#define MII_BCM54XX_AUXCTL_ACTL_TX_6DB 0x0400
#define MII_BCM54XX_AUXCTL_ACTL_SMDSP_ENA 0x0800
#define MII_BCM54XX_AUXCTL_MISC_WREN 0x8000
#define MII_BCM54XX_AUXCTL_MISC_FORCE_AMDIX 0x0200
#define MII_BCM54XX_AUXCTL_MISC_RDSEL_MISC 0x7000
#define MII_BCM54XX_AUXCTL_SHDWSEL_MISC 0x0007
#define MII_BCM54XX_AUXCTL_SHDWSEL_AUXCTL 0x0000
/*
 * Broadcom LED source encodings. These are used in BCM5461, BCM5481,
 * BCM5482, and possibly some others.
...
@@ -305,7 +305,10 @@ int phy_ethtool_gset(struct phy_device *phydev, struct ethtool_cmd *cmd)
ethtool_cmd_speed_set(cmd, phydev->speed);
cmd->duplex = phydev->duplex;
if (phydev->interface == PHY_INTERFACE_MODE_MOCA)
cmd->port = PORT_BNC;
else
cmd->port = PORT_MII;
cmd->phy_address = phydev->addr;
cmd->transceiver = phy_is_internal(phydev) ?
XCVR_INTERNAL : XCVR_EXTERNAL;
...
@@ -13,10 +13,17 @@
#define PHY_ID_BCM5461 0x002060c0
#define PHY_ID_BCM57780 0x03625d90
#define PHY_ID_BCM7366 0x600d8490
#define PHY_ID_BCM7439 0x600d8480
#define PHY_ID_BCM7445 0x600d8510
#define PHY_ID_BCM7XXX_28 0x600d8400
#define PHY_BCM_OUI_MASK 0xfffffc00
#define PHY_BCM_OUI_1 0x00206000
#define PHY_BCM_OUI_2 0x0143bc00
#define PHY_BCM_OUI_3 0x03625c00
#define PHY_BCM_OUI_4 0x600d0000
#define PHY_BCM_OUI_5 0x03625e00
#define PHY_BCM_FLAGS_MODE_COPPER 0x00000001
@@ -31,6 +38,59 @@
#define PHY_BRCM_EXT_IBND_TX_ENABLE 0x00002000
#define PHY_BRCM_CLEAR_RGMII_MODE 0x00004000
#define PHY_BRCM_DIS_TXCRXC_NOENRGY 0x00008000
/* Broadcom BCM7xxx specific workarounds */
#define PHY_BRCM_100MBPS_WAR 0x00010000
#define PHY_BCM_FLAGS_VALID 0x80000000
/* Broadcom BCM54XX register definitions, common to most Broadcom PHYs */
#define MII_BCM54XX_ECR 0x10 /* BCM54xx extended control register */
#define MII_BCM54XX_ECR_IM 0x1000 /* Interrupt mask */
#define MII_BCM54XX_ECR_IF 0x0800 /* Interrupt force */
#define MII_BCM54XX_ESR 0x11 /* BCM54xx extended status register */
#define MII_BCM54XX_ESR_IS 0x1000 /* Interrupt status */
#define MII_BCM54XX_EXP_DATA 0x15 /* Expansion register data */
#define MII_BCM54XX_EXP_SEL 0x17 /* Expansion register select */
#define MII_BCM54XX_EXP_SEL_SSD 0x0e00 /* Secondary SerDes select */
#define MII_BCM54XX_EXP_SEL_ER 0x0f00 /* Expansion register select */
#define MII_BCM54XX_AUX_CTL 0x18 /* Auxiliary control register */
#define MII_BCM54XX_ISR 0x1a /* BCM54xx interrupt status register */
#define MII_BCM54XX_IMR 0x1b /* BCM54xx interrupt mask register */
#define MII_BCM54XX_INT_CRCERR 0x0001 /* CRC error */
#define MII_BCM54XX_INT_LINK 0x0002 /* Link status changed */
#define MII_BCM54XX_INT_SPEED 0x0004 /* Link speed change */
#define MII_BCM54XX_INT_DUPLEX 0x0008 /* Duplex mode changed */
#define MII_BCM54XX_INT_LRS 0x0010 /* Local receiver status changed */
#define MII_BCM54XX_INT_RRS 0x0020 /* Remote receiver status changed */
#define MII_BCM54XX_INT_SSERR 0x0040 /* Scrambler synchronization error */
#define MII_BCM54XX_INT_UHCD 0x0080 /* Unsupported HCD negotiated */
#define MII_BCM54XX_INT_NHCD 0x0100 /* No HCD */
#define MII_BCM54XX_INT_NHCDL 0x0200 /* No HCD link */
#define MII_BCM54XX_INT_ANPR 0x0400 /* Auto-negotiation page received */
#define MII_BCM54XX_INT_LC 0x0800 /* All counters below 128 */
#define MII_BCM54XX_INT_HC 0x1000 /* Counter above 32768 */
#define MII_BCM54XX_INT_MDIX 0x2000 /* MDIX status change */
#define MII_BCM54XX_INT_PSERR 0x4000 /* Pair swap error */
#define MII_BCM54XX_SHD 0x1c /* 0x1c shadow registers */
#define MII_BCM54XX_SHD_WRITE 0x8000
#define MII_BCM54XX_SHD_VAL(x) ((x & 0x1f) << 10)
#define MII_BCM54XX_SHD_DATA(x) ((x & 0x3ff) << 0)
/*
* AUXILIARY CONTROL SHADOW ACCESS REGISTERS. (PHY REG 0x18)
*/
#define MII_BCM54XX_AUXCTL_SHDWSEL_AUXCTL 0x0000
#define MII_BCM54XX_AUXCTL_ACTL_TX_6DB 0x0400
#define MII_BCM54XX_AUXCTL_ACTL_SMDSP_ENA 0x0800
#define MII_BCM54XX_AUXCTL_MISC_WREN 0x8000
#define MII_BCM54XX_AUXCTL_MISC_FORCE_AMDIX 0x0200
#define MII_BCM54XX_AUXCTL_MISC_RDSEL_MISC 0x7000
#define MII_BCM54XX_AUXCTL_SHDWSEL_MISC 0x0007
#define MII_BCM54XX_AUXCTL_SHDWSEL_AUXCTL 0x0000
#endif /* _LINUX_BRCMPHY_H */
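Aside: the 0x18 auxiliary-control register follows a different convention from the 0x1c scheme: the SHDWSEL value is OR-ed into the low bits of the data written, and MISC writes must additionally set the write-enable bit. A sketch of the usual write helper, as carried in drivers/net/phy/broadcom.c:

static int bcm54xx_auxctl_write(struct phy_device *phydev, u16 regnum, u16 val)
{
	/* The low bits select the shadow (SHDWSEL_*), the rest is payload. */
	return phy_write(phydev, MII_BCM54XX_AUX_CTL, regnum | val);
}

/* e.g. a MISC-register write (some_value is a placeholder):
 *	bcm54xx_auxctl_write(phydev, MII_BCM54XX_AUXCTL_SHDWSEL_MISC,
 *			     MII_BCM54XX_AUXCTL_MISC_WREN | some_value);
 */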
@@ -74,6 +74,7 @@ typedef enum {
	PHY_INTERFACE_MODE_RTBI,
	PHY_INTERFACE_MODE_SMII,
	PHY_INTERFACE_MODE_XGMII,
	PHY_INTERFACE_MODE_MOCA,
	PHY_INTERFACE_MODE_MAX,
} phy_interface_t;
@@ -113,6 +114,8 @@ static inline const char *phy_modes(phy_interface_t interface)
return "smii"; return "smii";
case PHY_INTERFACE_MODE_XGMII: case PHY_INTERFACE_MODE_XGMII:
return "xgmii"; return "xgmii";
case PHY_INTERFACE_MODE_MOCA:
return "moca";
default: default:
return "unknown"; return "unknown";
} }
...
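With the new enumerator wired into phy_modes(), MAC drivers can log the interface generically without any MoCA-specific strings. A minimal, hypothetical usage sketch:

#include <linux/netdevice.h>
#include <linux/phy.h>

/* Hypothetical helper: report the attached PHY's interface mode. */
static void example_log_phy_mode(struct net_device *dev, phy_interface_t iface)
{
	netdev_info(dev, "PHY connected in %s mode\n", phy_modes(iface));
}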