Commit 96524ea4 authored by David S. Miller

Merge branch 'Xilinx-axienet-driver-updates'

Robert Hancock says:

====================
Xilinx axienet driver updates (v5)

This is a series of enhancements and bug fixes in order to get the mainline
version of this driver into a more generally usable state, including on
x86 or ARM platforms. It also converts the driver to use the phylink API
in order to provide support for SFP modules.

Changes since v4:
-Use reverse christmas tree variable order
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 40ae2550 f5203a3d
Documentation/devicetree/bindings/net/xilinx_axienet.txt

@@ -17,8 +17,15 @@ For more details about mdio please refer phy.txt file in the same directory.
 Required properties:
 - compatible	: Must be one of "xlnx,axi-ethernet-1.00.a",
 		  "xlnx,axi-ethernet-1.01.a", "xlnx,axi-ethernet-2.01.a"
-- reg		: Address and length of the IO space.
-- interrupts	: Should be a list of two interrupt, TX and RX.
+- reg		: Address and length of the IO space, as well as the address
+		  and length of the AXI DMA controller IO space, unless
+		  axistream-connected is specified, in which case the reg
+		  attribute of the node referenced by it is used.
+- interrupts	: Should be a list of 2 or 3 interrupts: TX DMA, RX DMA,
+		  and optionally Ethernet core. If axistream-connected is
+		  specified, the TX/RX DMA interrupts should be on that node
+		  instead, and only the Ethernet core interrupt is optionally
+		  specified here.
 - phy-handle	: Should point to the external phy device.
 		  See ethernet.txt file in the same directory.
 - xlnx,rxmem	: Set to allocated memory buffer for Rx/Tx in the hardware
@@ -31,15 +38,29 @@ Optional properties:
 		  1 to enable partial TX checksum offload,
 		  2 to enable full TX checksum offload
 - xlnx,rxcsum	: Same values as xlnx,txcsum but for RX checksum offload
+- clocks	: AXI bus clock for the device. Refer to common clock bindings.
+		  Used to calculate MDIO clock divisor. If not specified, it is
+		  auto-detected from the CPU clock (but only on platforms where
+		  this is possible). New device trees should specify this - the
+		  auto detection is only for backward compatibility.
+- axistream-connected: Reference to another node which contains the resources
+		  for the AXI DMA controller used by this device.
+		  If this is specified, the DMA-related resources from that
+		  device (DMA registers and DMA TX/RX interrupts) rather
+		  than this one will be used.
+- mdio		: Child node for MDIO bus. Must be defined if PHY access is
+		  required through the core's MDIO interface (i.e. always,
+		  unless the PHY is accessed through a different bus).
 
 Example:
 	axi_ethernet_eth: ethernet@40c00000 {
 		compatible = "xlnx,axi-ethernet-1.00.a";
 		device_type = "network";
 		interrupt-parent = <&microblaze_0_axi_intc>;
-		interrupts = <2 0>;
+		interrupts = <2 0 1>;
+		clocks = <&axi_clk>;
 		phy-mode = "mii";
-		reg = <0x40c00000 0x40000>;
+		reg = <0x40c00000 0x40000 0x50c00000 0x40000>;
 		xlnx,rxcsum = <0x2>;
 		xlnx,rxmem = <0x800>;
 		xlnx,txcsum = <0x2>;
drivers/net/ethernet/xilinx/Kconfig

@@ -6,7 +6,7 @@
 config NET_VENDOR_XILINX
 	bool "Xilinx devices"
 	default y
-	depends on PPC || PPC32 || MICROBLAZE || ARCH_ZYNQ || MIPS || X86 || COMPILE_TEST
+	depends on PPC || PPC32 || MICROBLAZE || ARCH_ZYNQ || MIPS || X86 || ARM || COMPILE_TEST
 	---help---
 	  If you have a network (Ethernet) card belonging to this class, say Y.
@@ -26,8 +26,8 @@ config XILINX_EMACLITE
 config XILINX_AXI_EMAC
 	tristate "Xilinx 10/100/1000 AXI Ethernet support"
-	depends on MICROBLAZE
-	select PHYLIB
+	depends on MICROBLAZE || X86 || ARM || COMPILE_TEST
+	select PHYLINK
 	---help---
 	  This driver supports the 10/100/1000 Ethernet from Xilinx for the
 	  AXI bus interface used in Xilinx Virtex FPGAs.
drivers/net/ethernet/xilinx/xilinx_axienet.h

@@ -13,6 +13,7 @@
 #include <linux/spinlock.h>
 #include <linux/interrupt.h>
 #include <linux/if_vlan.h>
+#include <linux/phylink.h>
 
 /* Packet size info */
 #define XAE_HDR_SIZE			14 /* Size of Ethernet header */
@@ -83,6 +84,8 @@
 #define XAXIDMA_CR_RUNSTOP_MASK	0x00000001 /* Start/stop DMA channel */
 #define XAXIDMA_CR_RESET_MASK	0x00000004 /* Reset DMA engine */
 
+#define XAXIDMA_SR_HALT_MASK	0x00000001 /* Indicates DMA channel halted */
+
 #define XAXIDMA_BD_NDESC_OFFSET		0x00 /* Next descriptor pointer */
 #define XAXIDMA_BD_BUFA_OFFSET		0x08 /* Buffer address */
 #define XAXIDMA_BD_CTRL_LEN_OFFSET	0x18 /* Control/buffer length */
@@ -356,9 +359,6 @@
  * @app2: MM2S/S2MM User Application Field 2.
  * @app3: MM2S/S2MM User Application Field 3.
  * @app4: MM2S/S2MM User Application Field 4.
- * @sw_id_offset: MM2S/S2MM Sw ID
- * @reserved5: Reserved and not used
- * @reserved6: Reserved and not used
  */
 struct axidma_bd {
 	u32 next;	/* Physical address of next buffer descriptor */
@@ -373,11 +373,9 @@ struct axidma_bd {
 	u32 app1;	/* TX start << 16 | insert */
 	u32 app2;	/* TX csum seed */
 	u32 app3;
-	u32 app4;
-	u32 sw_id_offset;
-	u32 reserved5;
-	u32 reserved6;
-};
+	u32 app4;	/* Last field used by HW */
+	struct sk_buff *skb;
+} __aligned(XAXIDMA_BD_MINIMUM_ALIGNMENT);
 
 /**
  * struct axienet_local - axienet private per device data
@@ -385,6 +383,7 @@ struct axidma_bd {
  * @dev:	Pointer to device structure
  * @phy_node:	Pointer to device node structure
  * @mii_bus:	Pointer to MII bus structure
+ * @regs_start: Resource start for axienet device addresses
  * @regs:	Base address for the axienet_local device address space
  * @dma_regs:	Base address for the axidma device address space
  * @dma_err_tasklet: Tasklet structure to process Axi DMA errors
@@ -422,10 +421,17 @@ struct axienet_local {
 	/* Connection to PHY device */
 	struct device_node *phy_node;
 
+	struct phylink *phylink;
+	struct phylink_config phylink_config;
+
+	/* Clock for AXI bus */
+	struct clk *clk;
+
 	/* MDIO bus data */
 	struct mii_bus *mii_bus;	/* MII bus reference */
 
 	/* IO registers, dma functions and IRQs */
+	resource_size_t regs_start;
 	void __iomem *regs;
 	void __iomem *dma_regs;
@@ -433,17 +439,19 @@ struct axienet_local {
 	int tx_irq;
 	int rx_irq;
+	int eth_irq;
 	phy_interface_t phy_mode;
 
 	u32 options;	/* Current options word */
-	u32 last_link;
 	u32 features;
 
 	/* Buffer descriptors */
 	struct axidma_bd *tx_bd_v;
 	dma_addr_t tx_bd_p;
+	u32 tx_bd_num;
 	struct axidma_bd *rx_bd_v;
 	dma_addr_t rx_bd_p;
+	u32 rx_bd_num;
 	u32 tx_bd_ci;
 	u32 tx_bd_tail;
 	u32 rx_bd_ci;
@@ -481,7 +489,7 @@ struct axienet_option {
  */
 static inline u32 axienet_ior(struct axienet_local *lp, off_t offset)
 {
-	return in_be32(lp->regs + offset);
+	return ioread32(lp->regs + offset);
 }
 
 static inline u32 axinet_ior_read_mcr(struct axienet_local *lp)
@@ -501,12 +509,13 @@ static inline u32 axinet_ior_read_mcr(struct axienet_local *lp)
 static inline void axienet_iow(struct axienet_local *lp, off_t offset,
 			       u32 value)
 {
-	out_be32((lp->regs + offset), value);
+	iowrite32(value, lp->regs + offset);
 }
 
 /* Function prototypes visible in xilinx_axienet_mdio.c for other files */
-int axienet_mdio_setup(struct axienet_local *lp, struct device_node *np);
-int axienet_mdio_wait_until_ready(struct axienet_local *lp);
+int axienet_mdio_enable(struct axienet_local *lp);
+void axienet_mdio_disable(struct axienet_local *lp);
+int axienet_mdio_setup(struct axienet_local *lp);
 void axienet_mdio_teardown(struct axienet_local *lp);
 
 #endif /* XILINX_AXI_ENET_H */
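
The key change in this header is that struct axidma_bd now ends with a software-only struct sk_buff pointer instead of raw u32 slots, with app4 the last field the DMA engine touches. Below is a standalone C sketch (not from the kernel tree) of why that is safe, assuming the IP's 0x40-byte descriptor alignment, which is the value behind XAXIDMA_BD_MINIMUM_ALIGNMENT in this header; the field list here is abbreviated relative to the real struct.

/* The engine only reads fields up to app4; the aligned attribute pads
 * sizeof() up to the hardware minimum, so an array of descriptors keeps
 * every element aligned and the trailing pointer stays invisible to HW.
 */
#include <stdint.h>

#define BD_MIN_ALIGN 0x40

struct sk_buff;	/* opaque here; the real definition lives in the kernel */

struct axidma_bd_sketch {
	uint32_t next;		/* physical address of next descriptor */
	uint32_t phys;		/* buffer address */
	uint32_t cntrl;
	uint32_t status;
	uint32_t app0, app1, app2, app3;
	uint32_t app4;		/* last field the DMA engine uses */
	struct sk_buff *skb;	/* software-only bookkeeping */
} __attribute__((aligned(BD_MIN_ALIGN)));

_Static_assert(sizeof(struct axidma_bd_sketch) % BD_MIN_ALIGN == 0,
	       "descriptor stride must stay hardware-aligned");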
drivers/net/ethernet/xilinx/xilinx_axienet_main.c

@@ -7,6 +7,7 @@
  * Copyright (c) 2008-2009 Secret Lab Technologies Ltd.
  * Copyright (c) 2010 - 2011 Michal Simek <monstr@monstr.eu>
  * Copyright (c) 2010 - 2011 PetaLogix
+ * Copyright (c) 2019 SED Systems, a division of Calian Ltd.
  * Copyright (c) 2010 - 2012 Xilinx, Inc. All rights reserved.
  *
  * This is a driver for the Xilinx Axi Ethernet which is used in the Virtex6
@@ -21,6 +22,7 @@
  *  - Add support for extended VLAN support.
  */
 
+#include <linux/clk.h>
 #include <linux/delay.h>
 #include <linux/etherdevice.h>
 #include <linux/module.h>
@@ -38,16 +40,18 @@
 
 #include "xilinx_axienet.h"
 
-/* Descriptors defines for Tx and Rx DMA - 2^n for the best performance */
-#define TX_BD_NUM		64
-#define RX_BD_NUM		128
+/* Descriptors defines for Tx and Rx DMA */
+#define TX_BD_NUM_DEFAULT		64
+#define RX_BD_NUM_DEFAULT		1024
+#define TX_BD_NUM_MAX			4096
+#define RX_BD_NUM_MAX			4096
 
 /* Must be shorter than length of ethtool_drvinfo.driver field to fit */
 #define DRIVER_NAME		"xaxienet"
 #define DRIVER_DESCRIPTION	"Xilinx Axi Ethernet driver"
 #define DRIVER_VERSION		"1.00a"
 
-#define AXIENET_REGS_N		32
+#define AXIENET_REGS_N		40
 
 /* Match table for of_platform binding */
 static const struct of_device_id axienet_of_match[] = {
@@ -125,7 +129,7 @@ static struct axienet_option axienet_options[] = {
  */
 static inline u32 axienet_dma_in32(struct axienet_local *lp, off_t reg)
 {
-	return in_be32(lp->dma_regs + reg);
+	return ioread32(lp->dma_regs + reg);
 }
 
 /**
@@ -140,7 +144,7 @@ static inline u32 axienet_dma_in32(struct axienet_local *lp, off_t reg)
 static inline void axienet_dma_out32(struct axienet_local *lp,
 				     off_t reg, u32 value)
 {
-	out_be32((lp->dma_regs + reg), value);
+	iowrite32(value, lp->dma_regs + reg);
 }
 
 /**
@@ -156,22 +160,21 @@ static void axienet_dma_bd_release(struct net_device *ndev)
 	int i;
 	struct axienet_local *lp = netdev_priv(ndev);
 
-	for (i = 0; i < RX_BD_NUM; i++) {
+	for (i = 0; i < lp->rx_bd_num; i++) {
 		dma_unmap_single(ndev->dev.parent, lp->rx_bd_v[i].phys,
 				 lp->max_frm_size, DMA_FROM_DEVICE);
-		dev_kfree_skb((struct sk_buff *)
-			      (lp->rx_bd_v[i].sw_id_offset));
+		dev_kfree_skb(lp->rx_bd_v[i].skb);
 	}
 
 	if (lp->rx_bd_v) {
 		dma_free_coherent(ndev->dev.parent,
-				  sizeof(*lp->rx_bd_v) * RX_BD_NUM,
+				  sizeof(*lp->rx_bd_v) * lp->rx_bd_num,
 				  lp->rx_bd_v,
 				  lp->rx_bd_p);
 	}
 	if (lp->tx_bd_v) {
 		dma_free_coherent(ndev->dev.parent,
-				  sizeof(*lp->tx_bd_v) * TX_BD_NUM,
+				  sizeof(*lp->tx_bd_v) * lp->tx_bd_num,
 				  lp->tx_bd_v,
 				  lp->tx_bd_p);
 	}
@@ -201,33 +204,33 @@ static int axienet_dma_bd_init(struct net_device *ndev)
 
 	/* Allocate the Tx and Rx buffer descriptors. */
 	lp->tx_bd_v = dma_alloc_coherent(ndev->dev.parent,
-					 sizeof(*lp->tx_bd_v) * TX_BD_NUM,
+					 sizeof(*lp->tx_bd_v) * lp->tx_bd_num,
 					 &lp->tx_bd_p, GFP_KERNEL);
 	if (!lp->tx_bd_v)
 		goto out;
 
 	lp->rx_bd_v = dma_alloc_coherent(ndev->dev.parent,
-					 sizeof(*lp->rx_bd_v) * RX_BD_NUM,
+					 sizeof(*lp->rx_bd_v) * lp->rx_bd_num,
 					 &lp->rx_bd_p, GFP_KERNEL);
 	if (!lp->rx_bd_v)
 		goto out;
 
-	for (i = 0; i < TX_BD_NUM; i++) {
+	for (i = 0; i < lp->tx_bd_num; i++) {
 		lp->tx_bd_v[i].next = lp->tx_bd_p +
 				      sizeof(*lp->tx_bd_v) *
-				      ((i + 1) % TX_BD_NUM);
+				      ((i + 1) % lp->tx_bd_num);
 	}
 
-	for (i = 0; i < RX_BD_NUM; i++) {
+	for (i = 0; i < lp->rx_bd_num; i++) {
 		lp->rx_bd_v[i].next = lp->rx_bd_p +
 				      sizeof(*lp->rx_bd_v) *
-				      ((i + 1) % RX_BD_NUM);
+				      ((i + 1) % lp->rx_bd_num);
 
 		skb = netdev_alloc_skb_ip_align(ndev, lp->max_frm_size);
 		if (!skb)
 			goto out;
 
-		lp->rx_bd_v[i].sw_id_offset = (u32) skb;
+		lp->rx_bd_v[i].skb = skb;
 		lp->rx_bd_v[i].phys = dma_map_single(ndev->dev.parent,
 						     skb->data,
 						     lp->max_frm_size,
@@ -269,7 +272,7 @@ static int axienet_dma_bd_init(struct net_device *ndev)
 	axienet_dma_out32(lp, XAXIDMA_RX_CR_OFFSET,
 			  cr | XAXIDMA_CR_RUNSTOP_MASK);
 	axienet_dma_out32(lp, XAXIDMA_RX_TDESC_OFFSET, lp->rx_bd_p +
-			  (sizeof(*lp->rx_bd_v) * (RX_BD_NUM - 1)));
+			  (sizeof(*lp->rx_bd_v) * (lp->rx_bd_num - 1)));
 
 	/* Write to the RS (Run-stop) bit in the Tx channel control register.
 	 * Tx channel is now ready to run. But only after we write to the
@@ -434,17 +437,20 @@ static void axienet_setoptions(struct net_device *ndev, u32 options)
 	lp->options |= options;
 }
 
-static void __axienet_device_reset(struct axienet_local *lp, off_t offset)
+static void __axienet_device_reset(struct axienet_local *lp)
 {
 	u32 timeout;
 	/* Reset Axi DMA. This would reset Axi Ethernet core as well. The reset
 	 * process of Axi DMA takes a while to complete as all pending
 	 * commands/transfers will be flushed or completed during this
 	 * reset process.
+	 * Note that even though both TX and RX have their own reset register,
+	 * they both reset the entire DMA core, so only one needs to be used.
 	 */
-	axienet_dma_out32(lp, offset, XAXIDMA_CR_RESET_MASK);
+	axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET, XAXIDMA_CR_RESET_MASK);
 	timeout = DELAY_OF_ONE_MILLISEC;
-	while (axienet_dma_in32(lp, offset) & XAXIDMA_CR_RESET_MASK) {
+	while (axienet_dma_in32(lp, XAXIDMA_TX_CR_OFFSET) &
+				XAXIDMA_CR_RESET_MASK) {
 		udelay(1);
 		if (--timeout == 0) {
 			netdev_err(lp->ndev, "%s: DMA reset timeout!\n",
@@ -470,8 +476,7 @@ static void axienet_device_reset(struct net_device *ndev)
 	u32 axienet_status;
 	struct axienet_local *lp = netdev_priv(ndev);
 
-	__axienet_device_reset(lp, XAXIDMA_TX_CR_OFFSET);
-	__axienet_device_reset(lp, XAXIDMA_RX_CR_OFFSET);
+	__axienet_device_reset(lp);
 
 	lp->max_frm_size = XAE_MAX_VLAN_FRAME_SIZE;
 	lp->options |= XAE_OPTION_VLAN;
@@ -498,6 +503,8 @@ static void axienet_device_reset(struct net_device *ndev)
 	axienet_status = axienet_ior(lp, XAE_IP_OFFSET);
 	if (axienet_status & XAE_INT_RXRJECT_MASK)
 		axienet_iow(lp, XAE_IS_OFFSET, XAE_INT_RXRJECT_MASK);
+	axienet_iow(lp, XAE_IE_OFFSET, lp->eth_irq > 0 ?
+		    XAE_INT_RECV_ERROR_MASK : 0);
 
 	axienet_iow(lp, XAE_FCC_OFFSET, XAE_FCC_FCRX_MASK);
@@ -513,63 +520,6 @@ static void axienet_device_reset(struct net_device *ndev)
 	netif_trans_update(ndev);
 }
 
-/**
- * axienet_adjust_link - Adjust the PHY link speed/duplex.
- * @ndev:	Pointer to the net_device structure
- *
- * This function is called to change the speed and duplex setting after
- * auto negotiation is done by the PHY. This is the function that gets
- * registered with the PHY interface through the "of_phy_connect" call.
- */
-static void axienet_adjust_link(struct net_device *ndev)
-{
-	u32 emmc_reg;
-	u32 link_state;
-	u32 setspeed = 1;
-	struct axienet_local *lp = netdev_priv(ndev);
-	struct phy_device *phy = ndev->phydev;
-
-	link_state = phy->speed | (phy->duplex << 1) | phy->link;
-	if (lp->last_link != link_state) {
-		if ((phy->speed == SPEED_10) || (phy->speed == SPEED_100)) {
-			if (lp->phy_mode == PHY_INTERFACE_MODE_1000BASEX)
-				setspeed = 0;
-		} else {
-			if ((phy->speed == SPEED_1000) &&
-			    (lp->phy_mode == PHY_INTERFACE_MODE_MII))
-				setspeed = 0;
-		}
-
-		if (setspeed == 1) {
-			emmc_reg = axienet_ior(lp, XAE_EMMC_OFFSET);
-			emmc_reg &= ~XAE_EMMC_LINKSPEED_MASK;
-
-			switch (phy->speed) {
-			case SPEED_1000:
-				emmc_reg |= XAE_EMMC_LINKSPD_1000;
-				break;
-			case SPEED_100:
-				emmc_reg |= XAE_EMMC_LINKSPD_100;
-				break;
-			case SPEED_10:
-				emmc_reg |= XAE_EMMC_LINKSPD_10;
-				break;
-			default:
-				dev_err(&ndev->dev, "Speed other than 10, 100 "
-					"or 1Gbps is not supported\n");
-				break;
-			}
-
-			axienet_iow(lp, XAE_EMMC_OFFSET, emmc_reg);
-			lp->last_link = link_state;
-			phy_print_status(phy);
-		} else {
-			netdev_err(ndev,
-				   "Error setting Axi Ethernet mac speed\n");
-		}
-	}
-}
-
 /**
  * axienet_start_xmit_done - Invoked once a transmit is completed by the
  * Axi DMA Tx channel.
@@ -595,26 +545,31 @@ static void axienet_start_xmit_done(struct net_device *ndev)
 		dma_unmap_single(ndev->dev.parent, cur_p->phys,
 				(cur_p->cntrl & XAXIDMA_BD_CTRL_LENGTH_MASK),
 				DMA_TO_DEVICE);
-		if (cur_p->app4)
-			dev_consume_skb_irq((struct sk_buff *)cur_p->app4);
+		if (cur_p->skb)
+			dev_consume_skb_irq(cur_p->skb);
 		/*cur_p->phys = 0;*/
 		cur_p->app0 = 0;
 		cur_p->app1 = 0;
 		cur_p->app2 = 0;
 		cur_p->app4 = 0;
 		cur_p->status = 0;
+		cur_p->skb = NULL;
 
 		size += status & XAXIDMA_BD_STS_ACTUAL_LEN_MASK;
 		packets++;
 
-		++lp->tx_bd_ci;
-		lp->tx_bd_ci %= TX_BD_NUM;
+		if (++lp->tx_bd_ci >= lp->tx_bd_num)
+			lp->tx_bd_ci = 0;
 		cur_p = &lp->tx_bd_v[lp->tx_bd_ci];
 		status = cur_p->status;
 	}
 
 	ndev->stats.tx_packets += packets;
 	ndev->stats.tx_bytes += size;
+
+	/* Matches barrier in axienet_start_xmit */
+	smp_mb();
+
 	netif_wake_queue(ndev);
 }
@@ -635,7 +590,7 @@ static inline int axienet_check_tx_bd_space(struct axienet_local *lp,
 					    int num_frag)
 {
 	struct axidma_bd *cur_p;
-	cur_p = &lp->tx_bd_v[(lp->tx_bd_tail + num_frag) % TX_BD_NUM];
+	cur_p = &lp->tx_bd_v[(lp->tx_bd_tail + num_frag) % lp->tx_bd_num];
 	if (cur_p->status & XAXIDMA_BD_STS_ALL_MASK)
 		return NETDEV_TX_BUSY;
 	return 0;
@@ -670,9 +625,19 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	cur_p = &lp->tx_bd_v[lp->tx_bd_tail];
 
 	if (axienet_check_tx_bd_space(lp, num_frag)) {
-		if (!netif_queue_stopped(ndev))
-			netif_stop_queue(ndev);
-		return NETDEV_TX_BUSY;
+		if (netif_queue_stopped(ndev))
+			return NETDEV_TX_BUSY;
+
+		netif_stop_queue(ndev);
+
+		/* Matches barrier in axienet_start_xmit_done */
+		smp_mb();
+
+		/* Space might have just been freed - check again */
+		if (axienet_check_tx_bd_space(lp, num_frag))
+			return NETDEV_TX_BUSY;
+
+		netif_wake_queue(ndev);
 	}
 
 	if (skb->ip_summed == CHECKSUM_PARTIAL) {
@@ -695,8 +660,8 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 				     skb_headlen(skb), DMA_TO_DEVICE);
 
 	for (ii = 0; ii < num_frag; ii++) {
-		++lp->tx_bd_tail;
-		lp->tx_bd_tail %= TX_BD_NUM;
+		if (++lp->tx_bd_tail >= lp->tx_bd_num)
+			lp->tx_bd_tail = 0;
 		cur_p = &lp->tx_bd_v[lp->tx_bd_tail];
 		frag = &skb_shinfo(skb)->frags[ii];
 		cur_p->phys = dma_map_single(ndev->dev.parent,
@@ -707,13 +672,13 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	}
 
 	cur_p->cntrl |= XAXIDMA_BD_CTRL_TXEOF_MASK;
-	cur_p->app4 = (unsigned long)skb;
+	cur_p->skb = skb;
 
 	tail_p = lp->tx_bd_p + sizeof(*lp->tx_bd_v) * lp->tx_bd_tail;
 	/* Start the transfer */
 	axienet_dma_out32(lp, XAXIDMA_TX_TDESC_OFFSET, tail_p);
-	++lp->tx_bd_tail;
-	lp->tx_bd_tail %= TX_BD_NUM;
+	if (++lp->tx_bd_tail >= lp->tx_bd_num)
+		lp->tx_bd_tail = 0;
 
 	return NETDEV_TX_OK;
 }
@@ -742,13 +707,15 @@ static void axienet_recv(struct net_device *ndev)
 
 	while ((cur_p->status & XAXIDMA_BD_STS_COMPLETE_MASK)) {
 		tail_p = lp->rx_bd_p + sizeof(*lp->rx_bd_v) * lp->rx_bd_ci;
-		skb = (struct sk_buff *) (cur_p->sw_id_offset);
-		length = cur_p->app4 & 0x0000FFFF;
 
 		dma_unmap_single(ndev->dev.parent, cur_p->phys,
 				 lp->max_frm_size,
 				 DMA_FROM_DEVICE);
 
+		skb = cur_p->skb;
+		cur_p->skb = NULL;
+		length = cur_p->app4 & 0x0000FFFF;
+
 		skb_put(skb, length);
 		skb->protocol = eth_type_trans(skb, ndev);
 		/*skb_checksum_none_assert(skb);*/
@@ -783,10 +750,10 @@ static void axienet_recv(struct net_device *ndev)
 				     DMA_FROM_DEVICE);
 		cur_p->cntrl = lp->max_frm_size;
 		cur_p->status = 0;
-		cur_p->sw_id_offset = (u32) new_skb;
+		cur_p->skb = new_skb;
 
-		++lp->rx_bd_ci;
-		lp->rx_bd_ci %= RX_BD_NUM;
+		if (++lp->rx_bd_ci >= lp->rx_bd_num)
+			lp->rx_bd_ci = 0;
 		cur_p = &lp->rx_bd_v[lp->rx_bd_ci];
 	}
@@ -802,7 +769,7 @@ static void axienet_recv(struct net_device *ndev)
  * @irq:	irq number
  * @_ndev:	net_device pointer
  *
- * Return: IRQ_HANDLED for all cases.
+ * Return: IRQ_HANDLED if device generated a TX interrupt, IRQ_NONE otherwise.
  *
  * This is the Axi DMA Tx done Isr. It invokes "axienet_start_xmit_done"
  * to complete the BD processing.
@@ -821,7 +788,7 @@ static irqreturn_t axienet_tx_irq(int irq, void *_ndev)
 		goto out;
 	}
 	if (!(status & XAXIDMA_IRQ_ALL_MASK))
-		dev_err(&ndev->dev, "No interrupts asserted in Tx path\n");
+		return IRQ_NONE;
 	if (status & XAXIDMA_IRQ_ERROR_MASK) {
 		dev_err(&ndev->dev, "DMA Tx error 0x%x\n", status);
 		dev_err(&ndev->dev, "Current BD is at: 0x%x\n",
@@ -851,7 +818,7 @@ static irqreturn_t axienet_tx_irq(int irq, void *_ndev)
 * @irq:	irq number
 * @_ndev:	net_device pointer
 *
- * Return: IRQ_HANDLED for all cases.
+ * Return: IRQ_HANDLED if device generated a RX interrupt, IRQ_NONE otherwise.
 *
 * This is the Axi DMA Rx Isr. It invokes "axienet_recv" to complete the BD
 * processing.
@@ -870,7 +837,7 @@ static irqreturn_t axienet_rx_irq(int irq, void *_ndev)
 		goto out;
 	}
 	if (!(status & XAXIDMA_IRQ_ALL_MASK))
-		dev_err(&ndev->dev, "No interrupts asserted in Rx path\n");
+		return IRQ_NONE;
 	if (status & XAXIDMA_IRQ_ERROR_MASK) {
 		dev_err(&ndev->dev, "DMA Rx error 0x%x\n", status);
 		dev_err(&ndev->dev, "Current BD is at: 0x%x\n",
@@ -895,6 +862,35 @@ static irqreturn_t axienet_rx_irq(int irq, void *_ndev)
 	return IRQ_HANDLED;
 }
 
+/**
+ * axienet_eth_irq - Ethernet core Isr.
+ * @irq:	irq number
+ * @_ndev:	net_device pointer
+ *
+ * Return: IRQ_HANDLED if device generated a core interrupt, IRQ_NONE otherwise.
+ *
+ * Handle miscellaneous conditions indicated by Ethernet core IRQ.
+ */
+static irqreturn_t axienet_eth_irq(int irq, void *_ndev)
+{
+	struct net_device *ndev = _ndev;
+	struct axienet_local *lp = netdev_priv(ndev);
+	unsigned int pending;
+
+	pending = axienet_ior(lp, XAE_IP_OFFSET);
+	if (!pending)
+		return IRQ_NONE;
+
+	if (pending & XAE_INT_RXFIFOOVR_MASK)
+		ndev->stats.rx_missed_errors++;
+
+	if (pending & XAE_INT_RXRJECT_MASK)
+		ndev->stats.rx_frame_errors++;
+
+	axienet_iow(lp, XAE_IS_OFFSET, pending);
+	return IRQ_HANDLED;
+}
+
 static void axienet_dma_err_handler(unsigned long data);
 
 /**
@@ -904,67 +900,72 @@ static void axienet_dma_err_handler(unsigned long data);
 * Return: 0, on success.
 *	    non-zero error value on failure
 *
- * This is the driver open routine. It calls phy_start to start the PHY device.
+ * This is the driver open routine. It calls phylink_start to start the
+ * PHY device.
 * It also allocates interrupt service routines, enables the interrupt lines
 * and ISR handling. Axi Ethernet core is reset through Axi DMA core. Buffer
 * descriptors are initialized.
 */
 static int axienet_open(struct net_device *ndev)
 {
-	int ret, mdio_mcreg;
+	int ret;
 	struct axienet_local *lp = netdev_priv(ndev);
-	struct phy_device *phydev = NULL;
 
 	dev_dbg(&ndev->dev, "axienet_open()\n");
 
-	mdio_mcreg = axienet_ior(lp, XAE_MDIO_MC_OFFSET);
-	ret = axienet_mdio_wait_until_ready(lp);
-	if (ret < 0)
-		return ret;
 	/* Disable the MDIO interface till Axi Ethernet Reset is completed.
 	 * When we do an Axi Ethernet reset, it resets the complete core
-	 * including the MDIO. If MDIO is not disabled when the reset
-	 * process is started, MDIO will be broken afterwards.
+	 * including the MDIO. MDIO must be disabled before resetting
+	 * and re-enabled afterwards.
+	 * Hold MDIO bus lock to avoid MDIO accesses during the reset.
 	 */
-	axienet_iow(lp, XAE_MDIO_MC_OFFSET,
-		    (mdio_mcreg & (~XAE_MDIO_MC_MDIOEN_MASK)));
+	mutex_lock(&lp->mii_bus->mdio_lock);
+	axienet_mdio_disable(lp);
 	axienet_device_reset(ndev);
-	/* Enable the MDIO */
-	axienet_iow(lp, XAE_MDIO_MC_OFFSET, mdio_mcreg);
-	ret = axienet_mdio_wait_until_ready(lp);
+	ret = axienet_mdio_enable(lp);
+	mutex_unlock(&lp->mii_bus->mdio_lock);
 	if (ret < 0)
 		return ret;
 
-	if (lp->phy_node) {
-		phydev = of_phy_connect(lp->ndev, lp->phy_node,
-					axienet_adjust_link, 0, lp->phy_mode);
-
-		if (!phydev)
-			dev_err(lp->dev, "of_phy_connect() failed\n");
-		else
-			phy_start(phydev);
+	ret = phylink_of_phy_connect(lp->phylink, lp->dev->of_node, 0);
+	if (ret) {
+		dev_err(lp->dev, "phylink_of_phy_connect() failed: %d\n", ret);
+		return ret;
 	}
 
+	phylink_start(lp->phylink);
+
 	/* Enable tasklets for Axi DMA error handling */
 	tasklet_init(&lp->dma_err_tasklet, axienet_dma_err_handler,
 		     (unsigned long) lp);
 
 	/* Enable interrupts for Axi DMA Tx */
-	ret = request_irq(lp->tx_irq, axienet_tx_irq, 0, ndev->name, ndev);
+	ret = request_irq(lp->tx_irq, axienet_tx_irq, IRQF_SHARED,
+			  ndev->name, ndev);
 	if (ret)
 		goto err_tx_irq;
 	/* Enable interrupts for Axi DMA Rx */
-	ret = request_irq(lp->rx_irq, axienet_rx_irq, 0, ndev->name, ndev);
+	ret = request_irq(lp->rx_irq, axienet_rx_irq, IRQF_SHARED,
+			  ndev->name, ndev);
 	if (ret)
 		goto err_rx_irq;
+	/* Enable interrupts for Axi Ethernet core (if defined) */
+	if (lp->eth_irq > 0) {
+		ret = request_irq(lp->eth_irq, axienet_eth_irq, IRQF_SHARED,
+				  ndev->name, ndev);
+		if (ret)
+			goto err_eth_irq;
+	}
 
 	return 0;
 
+err_eth_irq:
+	free_irq(lp->rx_irq, ndev);
 err_rx_irq:
 	free_irq(lp->tx_irq, ndev);
 err_tx_irq:
-	if (phydev)
-		phy_disconnect(phydev);
+	phylink_stop(lp->phylink);
+	phylink_disconnect_phy(lp->phylink);
 	tasklet_kill(&lp->dma_err_tasklet);
 	dev_err(lp->dev, "request_irq() failed\n");
 	return ret;
@@ -976,34 +977,61 @@ static int axienet_open(struct net_device *ndev)
 *
 * Return: 0, on success.
 *
- * This is the driver stop routine. It calls phy_disconnect to stop the PHY
+ * This is the driver stop routine. It calls phylink_disconnect to stop the PHY
 * device. It also removes the interrupt handlers and disables the interrupts.
 * The Axi DMA Tx/Rx BDs are released.
 */
 static int axienet_stop(struct net_device *ndev)
 {
-	u32 cr;
+	u32 cr, sr;
+	int count;
 	struct axienet_local *lp = netdev_priv(ndev);
 
 	dev_dbg(&ndev->dev, "axienet_close()\n");
 
-	cr = axienet_dma_in32(lp, XAXIDMA_RX_CR_OFFSET);
-	axienet_dma_out32(lp, XAXIDMA_RX_CR_OFFSET,
-			  cr & (~XAXIDMA_CR_RUNSTOP_MASK));
-	cr = axienet_dma_in32(lp, XAXIDMA_TX_CR_OFFSET);
-	axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET,
-			  cr & (~XAXIDMA_CR_RUNSTOP_MASK));
+	phylink_stop(lp->phylink);
+	phylink_disconnect_phy(lp->phylink);
+
 	axienet_setoptions(ndev, lp->options &
 			   ~(XAE_OPTION_TXEN | XAE_OPTION_RXEN));
 
+	cr = axienet_dma_in32(lp, XAXIDMA_RX_CR_OFFSET);
+	cr &= ~(XAXIDMA_CR_RUNSTOP_MASK | XAXIDMA_IRQ_ALL_MASK);
+	axienet_dma_out32(lp, XAXIDMA_RX_CR_OFFSET, cr);
+
+	cr = axienet_dma_in32(lp, XAXIDMA_TX_CR_OFFSET);
+	cr &= ~(XAXIDMA_CR_RUNSTOP_MASK | XAXIDMA_IRQ_ALL_MASK);
+	axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET, cr);
+
+	axienet_iow(lp, XAE_IE_OFFSET, 0);
+
+	/* Give DMAs a chance to halt gracefully */
+	sr = axienet_dma_in32(lp, XAXIDMA_RX_SR_OFFSET);
+	for (count = 0; !(sr & XAXIDMA_SR_HALT_MASK) && count < 5; ++count) {
+		msleep(20);
+		sr = axienet_dma_in32(lp, XAXIDMA_RX_SR_OFFSET);
+	}
+
+	sr = axienet_dma_in32(lp, XAXIDMA_TX_SR_OFFSET);
+	for (count = 0; !(sr & XAXIDMA_SR_HALT_MASK) && count < 5; ++count) {
+		msleep(20);
+		sr = axienet_dma_in32(lp, XAXIDMA_TX_SR_OFFSET);
+	}
+
+	/* Do a reset to ensure DMA is really stopped */
+	mutex_lock(&lp->mii_bus->mdio_lock);
+	axienet_mdio_disable(lp);
+	__axienet_device_reset(lp);
+	axienet_mdio_enable(lp);
+	mutex_unlock(&lp->mii_bus->mdio_lock);
+
 	tasklet_kill(&lp->dma_err_tasklet);
 
+	if (lp->eth_irq > 0)
+		free_irq(lp->eth_irq, ndev);
 	free_irq(lp->tx_irq, ndev);
 	free_irq(lp->rx_irq, ndev);
 
-	if (ndev->phydev)
-		phy_disconnect(ndev->phydev);
-
 	axienet_dma_bd_release(ndev);
 	return 0;
 }
@@ -1151,6 +1179,48 @@ static void axienet_ethtools_get_regs(struct net_device *ndev,
 	data[29] = axienet_ior(lp, XAE_FMI_OFFSET);
 	data[30] = axienet_ior(lp, XAE_AF0_OFFSET);
 	data[31] = axienet_ior(lp, XAE_AF1_OFFSET);
+	data[32] = axienet_dma_in32(lp, XAXIDMA_TX_CR_OFFSET);
+	data[33] = axienet_dma_in32(lp, XAXIDMA_TX_SR_OFFSET);
+	data[34] = axienet_dma_in32(lp, XAXIDMA_TX_CDESC_OFFSET);
+	data[35] = axienet_dma_in32(lp, XAXIDMA_TX_TDESC_OFFSET);
+	data[36] = axienet_dma_in32(lp, XAXIDMA_RX_CR_OFFSET);
+	data[37] = axienet_dma_in32(lp, XAXIDMA_RX_SR_OFFSET);
+	data[38] = axienet_dma_in32(lp, XAXIDMA_RX_CDESC_OFFSET);
+	data[39] = axienet_dma_in32(lp, XAXIDMA_RX_TDESC_OFFSET);
+}
+
+static void axienet_ethtools_get_ringparam(struct net_device *ndev,
+					   struct ethtool_ringparam *ering)
+{
+	struct axienet_local *lp = netdev_priv(ndev);
+
+	ering->rx_max_pending = RX_BD_NUM_MAX;
+	ering->rx_mini_max_pending = 0;
+	ering->rx_jumbo_max_pending = 0;
+	ering->tx_max_pending = TX_BD_NUM_MAX;
+	ering->rx_pending = lp->rx_bd_num;
+	ering->rx_mini_pending = 0;
+	ering->rx_jumbo_pending = 0;
+	ering->tx_pending = lp->tx_bd_num;
+}
+
+static int axienet_ethtools_set_ringparam(struct net_device *ndev,
+					  struct ethtool_ringparam *ering)
+{
+	struct axienet_local *lp = netdev_priv(ndev);
+
+	if (ering->rx_pending > RX_BD_NUM_MAX ||
+	    ering->rx_mini_pending ||
+	    ering->rx_jumbo_pending ||
+	    ering->rx_pending > TX_BD_NUM_MAX)
+		return -EINVAL;
+
+	if (netif_running(ndev))
+		return -EBUSY;
+
+	lp->rx_bd_num = ering->rx_pending;
+	lp->tx_bd_num = ering->tx_pending;
+	return 0;
 }
 
 /**
@@ -1166,12 +1236,9 @@ static void
 axienet_ethtools_get_pauseparam(struct net_device *ndev,
 				struct ethtool_pauseparam *epauseparm)
 {
-	u32 regval;
 	struct axienet_local *lp = netdev_priv(ndev);
-	epauseparm->autoneg  = 0;
-	regval = axienet_ior(lp, XAE_FCC_OFFSET);
-	epauseparm->tx_pause = regval & XAE_FCC_FCTX_MASK;
-	epauseparm->rx_pause = regval & XAE_FCC_FCRX_MASK;
+
+	phylink_ethtool_get_pauseparam(lp->phylink, epauseparm);
 }
 
 /**
@@ -1190,27 +1257,9 @@ static int
 axienet_ethtools_set_pauseparam(struct net_device *ndev,
 				struct ethtool_pauseparam *epauseparm)
 {
-	u32 regval = 0;
 	struct axienet_local *lp = netdev_priv(ndev);
 
-	if (netif_running(ndev)) {
-		netdev_err(ndev,
-			   "Please stop netif before applying configuration\n");
-		return -EFAULT;
-	}
-
-	regval = axienet_ior(lp, XAE_FCC_OFFSET);
-	if (epauseparm->tx_pause)
-		regval |= XAE_FCC_FCTX_MASK;
-	else
-		regval &= ~XAE_FCC_FCTX_MASK;
-	if (epauseparm->rx_pause)
-		regval |= XAE_FCC_FCRX_MASK;
-	else
-		regval &= ~XAE_FCC_FCRX_MASK;
-	axienet_iow(lp, XAE_FCC_OFFSET, regval);
-
-	return 0;
+	return phylink_ethtool_set_pauseparam(lp->phylink, epauseparm);
 }
 
 /**
@@ -1289,17 +1338,170 @@ static int axienet_ethtools_set_coalesce(struct net_device *ndev,
 	return 0;
 }
 
+static int
+axienet_ethtools_get_link_ksettings(struct net_device *ndev,
+				    struct ethtool_link_ksettings *cmd)
+{
+	struct axienet_local *lp = netdev_priv(ndev);
+
+	return phylink_ethtool_ksettings_get(lp->phylink, cmd);
+}
+
+static int
+axienet_ethtools_set_link_ksettings(struct net_device *ndev,
+				    const struct ethtool_link_ksettings *cmd)
+{
+	struct axienet_local *lp = netdev_priv(ndev);
+
+	return phylink_ethtool_ksettings_set(lp->phylink, cmd);
+}
+
 static const struct ethtool_ops axienet_ethtool_ops = {
 	.get_drvinfo    = axienet_ethtools_get_drvinfo,
 	.get_regs_len   = axienet_ethtools_get_regs_len,
 	.get_regs       = axienet_ethtools_get_regs,
 	.get_link       = ethtool_op_get_link,
+	.get_ringparam	= axienet_ethtools_get_ringparam,
+	.set_ringparam	= axienet_ethtools_set_ringparam,
 	.get_pauseparam = axienet_ethtools_get_pauseparam,
 	.set_pauseparam = axienet_ethtools_set_pauseparam,
 	.get_coalesce   = axienet_ethtools_get_coalesce,
 	.set_coalesce   = axienet_ethtools_set_coalesce,
-	.get_link_ksettings = phy_ethtool_get_link_ksettings,
-	.set_link_ksettings = phy_ethtool_set_link_ksettings,
+	.get_link_ksettings = axienet_ethtools_get_link_ksettings,
+	.set_link_ksettings = axienet_ethtools_set_link_ksettings,
+};
+
+static void axienet_validate(struct phylink_config *config,
+			     unsigned long *supported,
+			     struct phylink_link_state *state)
+{
+	struct net_device *ndev = to_net_dev(config->dev);
+	struct axienet_local *lp = netdev_priv(ndev);
+	__ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, };
+
+	/* Only support the mode we are configured for */
+	if (state->interface != PHY_INTERFACE_MODE_NA &&
+	    state->interface != lp->phy_mode) {
+		netdev_warn(ndev, "Cannot use PHY mode %s, supported: %s\n",
+			    phy_modes(state->interface),
+			    phy_modes(lp->phy_mode));
+		bitmap_zero(supported, __ETHTOOL_LINK_MODE_MASK_NBITS);
+		return;
+	}
+
+	phylink_set(mask, Autoneg);
+	phylink_set_port_modes(mask);
+
+	phylink_set(mask, Asym_Pause);
+	phylink_set(mask, Pause);
+	phylink_set(mask, 1000baseX_Full);
+	phylink_set(mask, 10baseT_Full);
+	phylink_set(mask, 100baseT_Full);
+	phylink_set(mask, 1000baseT_Full);
+
+	bitmap_and(supported, supported, mask,
+		   __ETHTOOL_LINK_MODE_MASK_NBITS);
+	bitmap_and(state->advertising, state->advertising, mask,
+		   __ETHTOOL_LINK_MODE_MASK_NBITS);
+}
+
+static int axienet_mac_link_state(struct phylink_config *config,
+				  struct phylink_link_state *state)
+{
+	struct net_device *ndev = to_net_dev(config->dev);
+	struct axienet_local *lp = netdev_priv(ndev);
+	u32 emmc_reg, fcc_reg;
+
+	state->interface = lp->phy_mode;
+
+	emmc_reg = axienet_ior(lp, XAE_EMMC_OFFSET);
+	if (emmc_reg & XAE_EMMC_LINKSPD_1000)
+		state->speed = SPEED_1000;
+	else if (emmc_reg & XAE_EMMC_LINKSPD_100)
+		state->speed = SPEED_100;
+	else
+		state->speed = SPEED_10;
+
+	state->pause = 0;
+	fcc_reg = axienet_ior(lp, XAE_FCC_OFFSET);
+	if (fcc_reg & XAE_FCC_FCTX_MASK)
+		state->pause |= MLO_PAUSE_TX;
+	if (fcc_reg & XAE_FCC_FCRX_MASK)
+		state->pause |= MLO_PAUSE_RX;
+
+	state->an_complete = 0;
+	state->duplex = 1;
+
+	return 1;
+}
+
+static void axienet_mac_an_restart(struct phylink_config *config)
+{
+	/* Unsupported, do nothing */
+}
+
+static void axienet_mac_config(struct phylink_config *config, unsigned int mode,
+			       const struct phylink_link_state *state)
+{
+	struct net_device *ndev = to_net_dev(config->dev);
+	struct axienet_local *lp = netdev_priv(ndev);
+	u32 emmc_reg, fcc_reg;
+
+	emmc_reg = axienet_ior(lp, XAE_EMMC_OFFSET);
+	emmc_reg &= ~XAE_EMMC_LINKSPEED_MASK;
+
+	switch (state->speed) {
+	case SPEED_1000:
+		emmc_reg |= XAE_EMMC_LINKSPD_1000;
+		break;
+	case SPEED_100:
+		emmc_reg |= XAE_EMMC_LINKSPD_100;
+		break;
+	case SPEED_10:
+		emmc_reg |= XAE_EMMC_LINKSPD_10;
+		break;
+	default:
+		dev_err(&ndev->dev,
+			"Speed other than 10, 100 or 1Gbps is not supported\n");
+		break;
+	}
+
+	axienet_iow(lp, XAE_EMMC_OFFSET, emmc_reg);
+
+	fcc_reg = axienet_ior(lp, XAE_FCC_OFFSET);
+	if (state->pause & MLO_PAUSE_TX)
+		fcc_reg |= XAE_FCC_FCTX_MASK;
+	else
+		fcc_reg &= ~XAE_FCC_FCTX_MASK;
+	if (state->pause & MLO_PAUSE_RX)
+		fcc_reg |= XAE_FCC_FCRX_MASK;
+	else
+		fcc_reg &= ~XAE_FCC_FCRX_MASK;
+	axienet_iow(lp, XAE_FCC_OFFSET, fcc_reg);
+}
+
+static void axienet_mac_link_down(struct phylink_config *config,
+				  unsigned int mode,
+				  phy_interface_t interface)
+{
+	/* nothing meaningful to do */
+}
+
+static void axienet_mac_link_up(struct phylink_config *config,
+				unsigned int mode,
+				phy_interface_t interface,
+				struct phy_device *phy)
+{
+	/* nothing meaningful to do */
+}
+
+static const struct phylink_mac_ops axienet_phylink_ops = {
+	.validate = axienet_validate,
+	.mac_link_state = axienet_mac_link_state,
+	.mac_an_restart = axienet_mac_an_restart,
+	.mac_config = axienet_mac_config,
+	.mac_link_down = axienet_mac_link_down,
+	.mac_link_up = axienet_mac_link_up,
 };
 
 /**
@@ -1313,38 +1515,33 @@ static void axienet_dma_err_handler(unsigned long data)
 {
 	u32 axienet_status;
 	u32 cr, i;
-	int mdio_mcreg;
 	struct axienet_local *lp = (struct axienet_local *) data;
 	struct net_device *ndev = lp->ndev;
 	struct axidma_bd *cur_p;
 
 	axienet_setoptions(ndev, lp->options &
 			   ~(XAE_OPTION_TXEN | XAE_OPTION_RXEN));
-	mdio_mcreg = axienet_ior(lp, XAE_MDIO_MC_OFFSET);
-	axienet_mdio_wait_until_ready(lp);
 	/* Disable the MDIO interface till Axi Ethernet Reset is completed.
 	 * When we do an Axi Ethernet reset, it resets the complete core
-	 * including the MDIO. So if MDIO is not disabled when the reset
-	 * process is started, MDIO will be broken afterwards.
+	 * including the MDIO. MDIO must be disabled before resetting
+	 * and re-enabled afterwards.
+	 * Hold MDIO bus lock to avoid MDIO accesses during the reset.
 	 */
-	axienet_iow(lp, XAE_MDIO_MC_OFFSET, (mdio_mcreg &
-		    ~XAE_MDIO_MC_MDIOEN_MASK));
-
-	__axienet_device_reset(lp, XAXIDMA_TX_CR_OFFSET);
-	__axienet_device_reset(lp, XAXIDMA_RX_CR_OFFSET);
-
-	axienet_iow(lp, XAE_MDIO_MC_OFFSET, mdio_mcreg);
-	axienet_mdio_wait_until_ready(lp);
+	mutex_lock(&lp->mii_bus->mdio_lock);
+	axienet_mdio_disable(lp);
+	__axienet_device_reset(lp);
+	axienet_mdio_enable(lp);
+	mutex_unlock(&lp->mii_bus->mdio_lock);
 
-	for (i = 0; i < TX_BD_NUM; i++) {
+	for (i = 0; i < lp->tx_bd_num; i++) {
 		cur_p = &lp->tx_bd_v[i];
 		if (cur_p->phys)
 			dma_unmap_single(ndev->dev.parent, cur_p->phys,
 					 (cur_p->cntrl &
 					  XAXIDMA_BD_CTRL_LENGTH_MASK),
 					 DMA_TO_DEVICE);
-		if (cur_p->app4)
-			dev_kfree_skb_irq((struct sk_buff *) cur_p->app4);
+		if (cur_p->skb)
+			dev_kfree_skb_irq(cur_p->skb);
 		cur_p->phys = 0;
 		cur_p->cntrl = 0;
 		cur_p->status = 0;
@@ -1353,10 +1550,10 @@ static void axienet_dma_err_handler(unsigned long data)
 		cur_p->app2 = 0;
 		cur_p->app3 = 0;
 		cur_p->app4 = 0;
-		cur_p->sw_id_offset = 0;
+		cur_p->skb = NULL;
 	}
 
-	for (i = 0; i < RX_BD_NUM; i++) {
+	for (i = 0; i < lp->rx_bd_num; i++) {
 		cur_p = &lp->rx_bd_v[i];
 		cur_p->status = 0;
 		cur_p->app0 = 0;
@@ -1404,7 +1601,7 @@ static void axienet_dma_err_handler(unsigned long data)
 	axienet_dma_out32(lp, XAXIDMA_RX_CR_OFFSET,
 			  cr | XAXIDMA_CR_RUNSTOP_MASK);
 	axienet_dma_out32(lp, XAXIDMA_RX_TDESC_OFFSET, lp->rx_bd_p +
-			  (sizeof(*lp->rx_bd_v) * (RX_BD_NUM - 1)));
+			  (sizeof(*lp->rx_bd_v) * (lp->rx_bd_num - 1)));
 
 	/* Write to the RS (Run-stop) bit in the Tx channel control register.
 	 * Tx channel is now ready to run. But only after we write to the
@@ -1422,6 +1619,8 @@ static void axienet_dma_err_handler(unsigned long data)
 	axienet_status = axienet_ior(lp, XAE_IP_OFFSET);
 	if (axienet_status & XAE_INT_RXRJECT_MASK)
 		axienet_iow(lp, XAE_IS_OFFSET, XAE_INT_RXRJECT_MASK);
+	axienet_iow(lp, XAE_IE_OFFSET, lp->eth_irq > 0 ?
+		    XAE_INT_RECV_ERROR_MASK : 0);
 	axienet_iow(lp, XAE_FCC_OFFSET, XAE_FCC_FCRX_MASK);
 
 	/* Sync default options with HW but leave receiver and
@@ -1453,7 +1652,7 @@ static int axienet_probe(struct platform_device *pdev)
 	struct axienet_local *lp;
 	struct net_device *ndev;
 	const void *mac_addr;
-	struct resource *ethres, dmares;
+	struct resource *ethres;
 	u32 value;
 
 	ndev = alloc_etherdev(sizeof(*lp));
@@ -1476,8 +1675,11 @@ static int axienet_probe(struct platform_device *pdev)
 	lp->ndev = ndev;
 	lp->dev = &pdev->dev;
 	lp->options = XAE_OPTION_DEFAULTS;
+	lp->rx_bd_num = RX_BD_NUM_DEFAULT;
+	lp->tx_bd_num = TX_BD_NUM_DEFAULT;
 	/* Map device registers */
 	ethres = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	lp->regs_start = ethres->start;
 	lp->regs = devm_ioremap_resource(&pdev->dev, ethres);
 	if (IS_ERR(lp->regs)) {
 		dev_err(&pdev->dev, "could not map Axi Ethernet regs.\n");
@@ -1568,38 +1770,57 @@ static int axienet_probe(struct platform_device *pdev)
 
 	/* Find the DMA node, map the DMA registers, and decode the DMA IRQs */
 	np = of_parse_phandle(pdev->dev.of_node, "axistream-connected", 0);
-	if (!np) {
-		dev_err(&pdev->dev, "could not find DMA node\n");
-		ret = -ENODEV;
-		goto free_netdev;
-	}
-	ret = of_address_to_resource(np, 0, &dmares);
-	if (ret) {
-		dev_err(&pdev->dev, "unable to get DMA resource\n");
+	if (np) {
+		struct resource dmares;
+
+		ret = of_address_to_resource(np, 0, &dmares);
+		if (ret) {
+			dev_err(&pdev->dev,
+				"unable to get DMA resource\n");
+			of_node_put(np);
+			goto free_netdev;
+		}
+		lp->dma_regs = devm_ioremap_resource(&pdev->dev,
+						     &dmares);
+		lp->rx_irq = irq_of_parse_and_map(np, 1);
+		lp->tx_irq = irq_of_parse_and_map(np, 0);
 		of_node_put(np);
-		goto free_netdev;
+		lp->eth_irq = platform_get_irq(pdev, 0);
+	} else {
+		/* Check for these resources directly on the Ethernet node. */
+		struct resource *res = platform_get_resource(pdev,
+							     IORESOURCE_MEM, 1);
+		if (!res) {
+			dev_err(&pdev->dev, "unable to get DMA memory resource\n");
+			goto free_netdev;
+		}
+		lp->dma_regs = devm_ioremap_resource(&pdev->dev, res);
+		lp->rx_irq = platform_get_irq(pdev, 1);
+		lp->tx_irq = platform_get_irq(pdev, 0);
+		lp->eth_irq = platform_get_irq(pdev, 2);
 	}
-	lp->dma_regs = devm_ioremap_resource(&pdev->dev, &dmares);
 	if (IS_ERR(lp->dma_regs)) {
 		dev_err(&pdev->dev, "could not map DMA regs\n");
 		ret = PTR_ERR(lp->dma_regs);
-		of_node_put(np);
 		goto free_netdev;
 	}
-	lp->rx_irq = irq_of_parse_and_map(np, 1);
-	lp->tx_irq = irq_of_parse_and_map(np, 0);
-	of_node_put(np);
 	if ((lp->rx_irq <= 0) || (lp->tx_irq <= 0)) {
 		dev_err(&pdev->dev, "could not determine irqs\n");
 		ret = -ENOMEM;
 		goto free_netdev;
 	}
 
+	/* Check for Ethernet core IRQ (optional) */
+	if (lp->eth_irq <= 0)
+		dev_info(&pdev->dev, "Ethernet core IRQ not defined\n");
+
 	/* Retrieve the MAC address */
 	mac_addr = of_get_mac_address(pdev->dev.of_node);
 	if (IS_ERR(mac_addr)) {
-		dev_err(&pdev->dev, "could not find MAC address\n");
-		goto free_netdev;
+		dev_warn(&pdev->dev, "could not find MAC address property: %ld\n",
+			 PTR_ERR(mac_addr));
+		mac_addr = NULL;
 	}
 	axienet_set_mac_address(ndev, mac_addr);
@@ -1608,9 +1829,36 @@ static int axienet_probe(struct platform_device *pdev)
 
 	lp->phy_node = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
 	if (lp->phy_node) {
-		ret = axienet_mdio_setup(lp, pdev->dev.of_node);
+		lp->clk = devm_clk_get(&pdev->dev, NULL);
+		if (IS_ERR(lp->clk)) {
+			dev_warn(&pdev->dev, "Failed to get clock: %ld\n",
+				 PTR_ERR(lp->clk));
+			lp->clk = NULL;
+		} else {
+			ret = clk_prepare_enable(lp->clk);
+			if (ret) {
+				dev_err(&pdev->dev, "Unable to enable clock: %d\n",
+					ret);
+				goto free_netdev;
+			}
+		}
+
+		ret = axienet_mdio_setup(lp);
 		if (ret)
-			dev_warn(&pdev->dev, "error registering MDIO bus\n");
+			dev_warn(&pdev->dev,
+				 "error registering MDIO bus: %d\n", ret);
+	}
+
+	lp->phylink_config.dev = &ndev->dev;
+	lp->phylink_config.type = PHYLINK_NETDEV;
+
+	lp->phylink = phylink_create(&lp->phylink_config, pdev->dev.fwnode,
+				     lp->phy_mode,
+				     &axienet_phylink_ops);
+	if (IS_ERR(lp->phylink)) {
+		ret = PTR_ERR(lp->phylink);
+		dev_err(&pdev->dev, "phylink_create error (%i)\n", ret);
+		goto free_netdev;
 	}
 
 	ret = register_netdev(lp->ndev);
@@ -1632,9 +1880,16 @@ static int axienet_remove(struct platform_device *pdev)
 	struct net_device *ndev = platform_get_drvdata(pdev);
 	struct axienet_local *lp = netdev_priv(ndev);
 
-	axienet_mdio_teardown(lp);
 	unregister_netdev(ndev);
 
+	if (lp->phylink)
+		phylink_destroy(lp->phylink);
+
+	axienet_mdio_teardown(lp);
+
+	if (lp->clk)
+		clk_disable_unprepare(lp->clk);
+
 	of_node_put(lp->phy_node);
 	lp->phy_node = NULL;
@@ -1643,9 +1898,23 @@ static int axienet_remove(struct platform_device *pdev)
 	return 0;
 }
 
+static void axienet_shutdown(struct platform_device *pdev)
+{
+	struct net_device *ndev = platform_get_drvdata(pdev);
+
+	rtnl_lock();
+	netif_device_detach(ndev);
+
+	if (netif_running(ndev))
+		dev_close(ndev);
+
+	rtnl_unlock();
+}
+
 static struct platform_driver axienet_driver = {
 	.probe = axienet_probe,
 	.remove = axienet_remove,
+	.shutdown = axienet_shutdown,
 	.driver = {
 		 .name = "xilinx_axienet",
 		 .of_match_table = axienet_of_match,
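
The xmit changes above close a classic lost-wakeup race: without the barriers, the queue could be stopped just after the completion ISR freed descriptors, and nothing would ever wake it again. Below is a compilable userspace analogue of the stop/recheck/wake protocol, using C11 atomics in place of smp_mb(); the names (try_transmit, reclaim, free_slots) are invented for the sketch and only mirror the ordering pattern, not the driver's actual code.

#include <stdatomic.h>
#include <stdbool.h>

static atomic_uint free_slots;     /* descriptors available to the producer */
static atomic_bool queue_stopped;  /* analogue of the netif queue state */

/* Producer side: mirrors the new axienet_start_xmit() flow. */
static bool try_transmit(unsigned int needed)
{
	if (atomic_load(&free_slots) < needed) {
		if (atomic_load(&queue_stopped))
			return false;              /* NETDEV_TX_BUSY */
		atomic_store(&queue_stopped, true);

		/* Full barrier; pairs with the fence in reclaim(). */
		atomic_thread_fence(memory_order_seq_cst);

		/* Recheck: the completion side may have freed slots
		 * between our first check and the stop above.
		 */
		if (atomic_load(&free_slots) < needed)
			return false;              /* really full, stay stopped */
		atomic_store(&queue_stopped, false);   /* wake */
	}
	atomic_fetch_sub(&free_slots, needed);
	return true;                                   /* queued */
}

/* Consumer side: mirrors axienet_start_xmit_done() after reclaiming BDs. */
static void reclaim(unsigned int done)
{
	atomic_fetch_add(&free_slots, done);

	/* Full barrier; pairs with the fence in try_transmit(), so the
	 * producer's recheck observes the slots freed above before it
	 * commits to leaving the queue stopped.
	 */
	atomic_thread_fence(memory_order_seq_cst);

	atomic_store(&queue_stopped, false);           /* wake */
}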
drivers/net/ethernet/xilinx/xilinx_axienet_mdio.c

@@ -5,9 +5,11 @@
  * Copyright (c) 2009 Secret Lab Technologies, Ltd.
  * Copyright (c) 2010 - 2011 Michal Simek <monstr@monstr.eu>
  * Copyright (c) 2010 - 2011 PetaLogix
+ * Copyright (c) 2019 SED Systems, a division of Calian Ltd.
  * Copyright (c) 2010 - 2012 Xilinx, Inc. All rights reserved.
  */
 
+#include <linux/clk.h>
 #include <linux/of_address.h>
 #include <linux/of_mdio.h>
 #include <linux/jiffies.h>
@@ -16,10 +18,10 @@
 #include "xilinx_axienet.h"
 
 #define MAX_MDIO_FREQ		2500000 /* 2.5 MHz */
-#define DEFAULT_CLOCK_DIVISOR	XAE_MDIO_DIV_DFT
+#define DEFAULT_HOST_CLOCK	150000000 /* 150 MHz */
 
 /* Wait till MDIO interface is ready to accept a new transaction.*/
-int axienet_mdio_wait_until_ready(struct axienet_local *lp)
+static int axienet_mdio_wait_until_ready(struct axienet_local *lp)
 {
 	u32 val;
@@ -112,24 +114,43 @@ static int axienet_mdio_write(struct mii_bus *bus, int phy_id, int reg,
 }
 
 /**
- * axienet_mdio_setup - MDIO setup function
+ * axienet_mdio_enable - MDIO hardware setup function
 * @lp:	Pointer to axienet local data structure.
- * @np:	Pointer to device node
 *
- * Return:	0 on success, -ETIMEDOUT on a timeout, -ENOMEM when
- *		mdiobus_alloc (to allocate memory for mii bus structure) fails.
+ * Return:	0 on success, -ETIMEDOUT on a timeout.
 *
 * Sets up the MDIO interface by initializing the MDIO clock and enabling the
- * MDIO interface in hardware. Register the MDIO interface.
+ * MDIO interface in hardware.
 **/
-int axienet_mdio_setup(struct axienet_local *lp, struct device_node *np)
+int axienet_mdio_enable(struct axienet_local *lp)
 {
-	int ret;
 	u32 clk_div, host_clock;
-	struct mii_bus *bus;
-	struct resource res;
-	struct device_node *np1;
+
+	if (lp->clk) {
+		host_clock = clk_get_rate(lp->clk);
+	} else {
+		struct device_node *np1;
+
+		/* Legacy fallback: detect CPU clock frequency and use as AXI
+		 * bus clock frequency. This only works on certain platforms.
+		 */
+		np1 = of_find_node_by_name(NULL, "cpu");
+		if (!np1) {
+			netdev_warn(lp->ndev, "Could not find CPU device node.\n");
+			host_clock = DEFAULT_HOST_CLOCK;
+		} else {
+			int ret = of_property_read_u32(np1, "clock-frequency",
+						       &host_clock);
+			if (ret) {
+				netdev_warn(lp->ndev, "CPU clock-frequency property not found.\n");
+				host_clock = DEFAULT_HOST_CLOCK;
+			}
+			of_node_put(np1);
+		}
+		netdev_info(lp->ndev, "Setting assumed host clock to %u\n",
+			    host_clock);
+	}
 
 	/* clk_div can be calculated by deriving it from the equation:
 	 * fMDIO = fHOST / ((1 + clk_div) * 2)
 	 *
@@ -155,25 +176,6 @@ int axienet_mdio_setup(struct axienet_local *lp, struct device_node *np)
 	 * "clock-frequency" from the CPU
 	 */
 
-	np1 = of_find_node_by_name(NULL, "cpu");
-	if (!np1) {
-		netdev_warn(lp->ndev, "Could not find CPU device node.\n");
-		netdev_warn(lp->ndev,
-			    "Setting MDIO clock divisor to default %d\n",
-			    DEFAULT_CLOCK_DIVISOR);
-		clk_div = DEFAULT_CLOCK_DIVISOR;
-		goto issue;
-	}
-	if (of_property_read_u32(np1, "clock-frequency", &host_clock)) {
-		netdev_warn(lp->ndev, "clock-frequency property not found.\n");
-		netdev_warn(lp->ndev,
-			    "Setting MDIO clock divisor to default %d\n",
-			    DEFAULT_CLOCK_DIVISOR);
-		clk_div = DEFAULT_CLOCK_DIVISOR;
-		of_node_put(np1);
-		goto issue;
-	}
-
 	clk_div = (host_clock / (MAX_MDIO_FREQ * 2)) - 1;
 	/* If there is any remainder from the division of
 	 * fHOST / (MAX_MDIO_FREQ * 2), then we need to add
@@ -186,12 +188,39 @@ int axienet_mdio_setup(struct axienet_local *lp, struct device_node *np)
 		   "Setting MDIO clock divisor to %u/%u Hz host clock.\n",
 		   clk_div, host_clock);
 
-	of_node_put(np1);
-issue:
-	axienet_iow(lp, XAE_MDIO_MC_OFFSET,
-		    (((u32) clk_div) | XAE_MDIO_MC_MDIOEN_MASK));
+	axienet_iow(lp, XAE_MDIO_MC_OFFSET, clk_div | XAE_MDIO_MC_MDIOEN_MASK);
 
-	ret = axienet_mdio_wait_until_ready(lp);
+	return axienet_mdio_wait_until_ready(lp);
+}
+
+/**
+ * axienet_mdio_disable - MDIO hardware disable function
+ * @lp:	Pointer to axienet local data structure.
+ *
+ * Disable the MDIO interface in hardware.
+ **/
+void axienet_mdio_disable(struct axienet_local *lp)
+{
+	axienet_iow(lp, XAE_MDIO_MC_OFFSET, 0);
+}
+
+/**
+ * axienet_mdio_setup - MDIO setup function
+ * @lp:	Pointer to axienet local data structure.
+ *
+ * Return:	0 on success, -ETIMEDOUT on a timeout, -ENOMEM when
+ *		mdiobus_alloc (to allocate memory for mii bus structure) fails.
+ *
+ * Sets up the MDIO interface by initializing the MDIO clock and enabling the
+ * MDIO interface in hardware. Register the MDIO interface.
+ **/
+int axienet_mdio_setup(struct axienet_local *lp)
+{
+	struct device_node *mdio_node;
+	struct mii_bus *bus;
+	int ret;
+
+	ret = axienet_mdio_enable(lp);
 	if (ret < 0)
 		return ret;
@@ -199,10 +228,8 @@ int axienet_mdio_setup(struct axienet_local *lp, struct device_node *np)
 	if (!bus)
 		return -ENOMEM;
 
-	np1 = of_get_parent(lp->phy_node);
-	of_address_to_resource(np1, 0, &res);
-	snprintf(bus->id, MII_BUS_ID_SIZE, "%.8llx",
-		 (unsigned long long) res.start);
+	snprintf(bus->id, MII_BUS_ID_SIZE, "axienet-%.8llx",
+		 (unsigned long long)lp->regs_start);
 
 	bus->priv = lp;
 	bus->name = "Xilinx Axi Ethernet MDIO";
@@ -211,7 +238,9 @@ int axienet_mdio_setup(struct axienet_local *lp, struct device_node *np)
 	bus->parent = lp->dev;
 	lp->mii_bus = bus;
 
-	ret = of_mdiobus_register(bus, np1);
+	mdio_node = of_get_child_by_name(lp->dev->of_node, "mdio");
+	ret = of_mdiobus_register(bus, mdio_node);
+	of_node_put(mdio_node);
 	if (ret) {
 		mdiobus_free(bus);
 		lp->mii_bus = NULL;
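
To make the divisor math above concrete, here is a small standalone program, reusing the constants from the patch, that reproduces the computation and its round-up-on-remainder rule; for the assumed 150 MHz default host clock it yields clk_div = 29, i.e. exactly 2.5 MHz on the MDIO bus.

#include <stdio.h>

#define MAX_MDIO_FREQ		2500000u	/* 2.5 MHz */
#define DEFAULT_HOST_CLOCK	150000000u	/* 150 MHz */

int main(void)
{
	unsigned int host_clock = DEFAULT_HOST_CLOCK;

	/* fMDIO = fHOST / ((1 + clk_div) * 2), so derive the divisor and
	 * round up on any remainder so fMDIO never exceeds the 2.5 MHz
	 * IEEE 802.3 limit.
	 */
	unsigned int clk_div = host_clock / (MAX_MDIO_FREQ * 2) - 1;

	if (host_clock % (MAX_MDIO_FREQ * 2))
		clk_div++;

	printf("clk_div = %u -> fMDIO = %u Hz\n",
	       clk_div, host_clock / ((1 + clk_div) * 2));
	/* For 150 MHz: clk_div = 29, fMDIO = 150 MHz / 60 = 2.5 MHz. */
	return 0;
}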