Commit f05643a0 authored by Jakub Kicinski

eth: remove neterion/vxge

The last meaningful change to this driver was made by Jon in 2011.
As much as we'd like to believe that this is because the code is
perfect, chances are nobody is using this hardware.

Because of the size of this driver there is a nontrivial maintenance
cost to keeping this code around; over the last 2 years we have averaged
more than one change a month, some of which require nontrivial review
effort. See commit 877fe9d4 ("Revert "drivers/net/ethernet/neterion/vxge:
Fix a use-after-free bug in vxge-main.c"") for example.

Let's try to remove this driver. In general, IMHO, we need to
establish a clear path for shedding dead code. It will be hard to do
so unless we have some experience trying to delete stuff.

Link: https://lore.kernel.org/r/20220701044234.706229-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parent 3359619a
@@ -42,7 +42,6 @@ Contents:
mellanox/mlx5
microsoft/netvsc
neterion/s2io
-neterion/vxge
netronome/nfp
pensando/ionic
smsc/smc9
...
.. SPDX-License-Identifier: GPL-2.0
==============================================================================
Neterion's (Formerly S2io) X3100 Series 10GbE PCIe Server Adapter Linux driver
==============================================================================
.. Contents
1) Introduction
2) Features supported
3) Configurable driver parameters
4) Troubleshooting
1. Introduction
===============
This Linux driver supports all Neterion's X3100 series 10 GbE PCIe I/O
Virtualized Server adapters.
The X3100 series supports four modes of operation, configurable via
firmware:
- Single function mode
- Multi function mode
- SRIOV mode
- MRIOV mode
The functions share a 10GbE link and the pci-e bus, but hardly anything else
inside the ASIC. Features like independent hw reset, statistics, bandwidth/
priority allocation and guarantees, GRO, TSO, interrupt moderation etc are
supported independently on each function.
(See below for a complete list of features supported for both IPv4 and IPv6)
2. Features supported
=====================
i) Single function mode (up to 17 queues)
ii) Multi function mode (up to 17 functions)
iii) PCI-SIG's I/O Virtualization
- Single Root mode: v1.0 (up to 17 functions)
- Multi-Root mode: v1.0 (up to 17 functions)
iv) Jumbo frames
X3100 Series supports MTU up to 9600 bytes, modifiable using the
ip command.
v) Offloads supported: (Enabled by default)
- Checksum offload (TCP/UDP/IP) on transmit and receive paths
- TCP Segmentation Offload (TSO) on transmit path
- Generic Receive Offload (GRO) on receive path
vi) MSI-X: (Enabled by default)
Resulting in noticeable performance improvement (up to 7% on certain
platforms).
vii) NAPI: (Enabled by default)
For better Rx interrupt moderation.
viii) RTH (Receive Traffic Hash): (Enabled by default)
Receive side steering for better scaling.
ix) Statistics
Comprehensive MAC-level and software statistics displayed using
"ethtool -S" option.
x) Multiple hardware queues: (Enabled by default)
Up to 17 hardware based transmit and receive data channels, with
multiple steering options (transmit multiqueue enabled by default).
3) Configurable driver parameters:
----------------------------------
i) max_config_dev
Specifies maximum device functions to be enabled.
Valid range: 1-8
ii) max_config_port
Specifies number of ports to be enabled.
Valid range: 1,2
Default: 1
iii) max_config_vpath
Specifies maximum VPATH(s) configured for each device function.
Valid range: 1-17
iv) vlan_tag_strip
Enables/disables vlan tag stripping from all received tagged frames that
are not replicated at the internal L2 switch.
Valid range: 0,1 (disabled, enabled respectively)
Default: 1
v) addr_learn_en
Enable learning the mac address of the guest OS interface in
virtualization environment.
Valid range: 0,1 (disabled, enabled respectively)
Default: 0
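
The knobs above are ordinary module parameters. As a rough, hypothetical
sketch (not the vxge driver's actual declarations), parameters like these
are typically exposed from a kernel module as follows; the values shown
here are illustrative only:

  #include <linux/module.h>
  #include <linux/moduleparam.h>

  /* Illustrative only; names and ranges follow the documentation above. */
  static unsigned int max_config_vpath = 17;	/* valid range 1-17 */
  module_param(max_config_vpath, uint, 0444);
  MODULE_PARM_DESC(max_config_vpath,
		   "Maximum VPATHs configured per device function (1-17)");

  static unsigned int vlan_tag_strip = 1;	/* 0 - disabled, 1 - enabled */
  module_param(vlan_tag_strip, uint, 0444);
  MODULE_PARM_DESC(vlan_tag_strip,
		   "Strip VLAN tags from received frames (0/1)");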
@@ -13775,12 +13775,11 @@ L: netdev@vger.kernel.org
S: Maintained
F: net/sched/sch_netem.c

-NETERION 10GbE DRIVERS (s2io/vxge)
+NETERION 10GbE DRIVERS (s2io)
M: Jon Mason <jdmason@kudzu.us>
L: netdev@vger.kernel.org
S: Supported
F: Documentation/networking/device_drivers/ethernet/neterion/s2io.rst
-F: Documentation/networking/device_drivers/ethernet/neterion/vxge.rst
F: drivers/net/ethernet/neterion/

NETFILTER
...
@@ -32,28 +32,4 @@ config S2IO
To compile this driver as a module, choose M here. The module
will be called s2io.
config VXGE
tristate "Neterion (Exar) X3100 Series 10GbE PCIe Server Adapter"
depends on PCI
help
This driver supports Exar Corp's X3100 Series 10 GbE PCIe
I/O Virtualized Server Adapter. These were originally released from
Neterion, which was later acquired by Exar. So, the adapters might be
labeled as either one, depending on its age.
More specific information on configuring the driver is in
<file:Documentation/networking/device_drivers/ethernet/neterion/vxge.rst>.
To compile this driver as a module, choose M here. The module
will be called vxge.
config VXGE_DEBUG_TRACE_ALL
bool "Enabling All Debug trace statements in driver"
default n
depends on VXGE
help
Say Y here if you want to enabling all the debug trace statements in
the vxge driver. By default only few debug trace statements are
enabled.
endif # NET_VENDOR_NETERION
@@ -4,4 +4,3 @@
#
obj-$(CONFIG_S2IO) += s2io.o
-obj-$(CONFIG_VXGE) += vxge/
# SPDX-License-Identifier: GPL-2.0-only
#
# Makefile for Exar Corp's X3100 Series 10 GbE PCIe I/O
# Virtualized Server Adapter linux driver
obj-$(CONFIG_VXGE) += vxge.o
vxge-objs := vxge-config.o vxge-traffic.o vxge-ethtool.o vxge-main.o
This source diff could not be displayed because it is too large.
/******************************************************************************
* This software may be used and distributed according to the terms of
* the GNU General Public License (GPL), incorporated herein by reference.
* Drivers based on or derived from this code fall under the GPL and must
* retain the authorship, copyright and license notice. This file is not
* a complete program and may only be used when the entire operating
* system is licensed under the GPL.
* See the file COPYING in this distribution for more information.
*
* vxge-config.h: Driver for Exar Corp's X3100 Series 10GbE PCIe I/O
* Virtualized Server Adapter.
* Copyright(c) 2002-2010 Exar Corp.
******************************************************************************/
#ifndef VXGE_CONFIG_H
#define VXGE_CONFIG_H
#include <linux/hardirq.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <asm/io.h>
#ifndef VXGE_CACHE_LINE_SIZE
#define VXGE_CACHE_LINE_SIZE 128
#endif
#ifndef VXGE_ALIGN
#define VXGE_ALIGN(adrs, size) \
(((size) - (((u64)adrs) & ((size)-1))) & ((size)-1))
#endif
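/*
 * Usage sketch (illustrative, not part of the original header): VXGE_ALIGN()
 * yields the padding needed to bring an address up to the next size-byte
 * boundary. For example, with VXGE_CACHE_LINE_SIZE of 128:
 *
 *	u64 pad = VXGE_ALIGN(0x1008ULL, VXGE_CACHE_LINE_SIZE);
 *	// 128 - (0x1008 & 127) = 120 bytes of padding;
 *	// an already aligned address yields 0.
 */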
#define VXGE_HW_MIN_MTU ETH_MIN_MTU
#define VXGE_HW_MAX_MTU 9600
#define VXGE_HW_DEFAULT_MTU 1500
#define VXGE_HW_MAX_ROM_IMAGES 8
struct eprom_image {
u8 is_valid:1;
u8 index;
u8 type;
u16 version;
};
#ifdef VXGE_DEBUG_ASSERT
/**
* vxge_assert
* @test: C-condition to check
* @fmt: printf like format string
*
* This macro implements a traditional assert. By default assertions
* are enabled; they can be disabled by leaving the VXGE_DEBUG_ASSERT
* macro undefined at compile time.
*/
#define vxge_assert(test) BUG_ON(!(test))
#else
#define vxge_assert(test)
#endif /* end of VXGE_DEBUG_ASSERT */
/**
* enum vxge_debug_level
* @VXGE_NONE: debug disabled
* @VXGE_ERR: all errors are logged
* @VXGE_TRACE: all errors plus all kinds of verbose tracing printouts
* are logged. Very noisy.
*
* This enumeration is used to switch between different debug levels at
* runtime if the DEBUG macro is defined at compile time. If the DEBUG
* macro is not defined, the debug code is compiled out.
*/
enum vxge_debug_level {
VXGE_NONE = 0,
VXGE_TRACE = 1,
VXGE_ERR = 2
};
#define NULL_VPID 0xFFFFFFFF
#ifdef CONFIG_VXGE_DEBUG_TRACE_ALL
#define VXGE_DEBUG_MODULE_MASK 0xffffffff
#define VXGE_DEBUG_TRACE_MASK 0xffffffff
#define VXGE_DEBUG_ERR_MASK 0xffffffff
#define VXGE_DEBUG_MASK 0x000001ff
#else
#define VXGE_DEBUG_MODULE_MASK 0x20000000
#define VXGE_DEBUG_TRACE_MASK 0x20000000
#define VXGE_DEBUG_ERR_MASK 0x20000000
#define VXGE_DEBUG_MASK 0x00000001
#endif
/*
* @VXGE_COMPONENT_LL: do debug for vxge link layer module
* @VXGE_COMPONENT_ALL: activate debug for all modules with no exceptions
*
* This enumeration is used to distinguish modules or libraries at
* compile time and at runtime. The Makefile must declare the
* VXGE_DEBUG_MODULE_MASK macro and set it to a proper value.
*/
#define VXGE_COMPONENT_LL 0x20000000
#define VXGE_COMPONENT_ALL 0xffffffff
#define VXGE_HW_BASE_INF 100
#define VXGE_HW_BASE_ERR 200
#define VXGE_HW_BASE_BADCFG 300
enum vxge_hw_status {
VXGE_HW_OK = 0,
VXGE_HW_FAIL = 1,
VXGE_HW_PENDING = 2,
VXGE_HW_COMPLETIONS_REMAIN = 3,
VXGE_HW_INF_NO_MORE_COMPLETED_DESCRIPTORS = VXGE_HW_BASE_INF + 1,
VXGE_HW_INF_OUT_OF_DESCRIPTORS = VXGE_HW_BASE_INF + 2,
VXGE_HW_ERR_INVALID_HANDLE = VXGE_HW_BASE_ERR + 1,
VXGE_HW_ERR_OUT_OF_MEMORY = VXGE_HW_BASE_ERR + 2,
VXGE_HW_ERR_VPATH_NOT_AVAILABLE = VXGE_HW_BASE_ERR + 3,
VXGE_HW_ERR_VPATH_NOT_OPEN = VXGE_HW_BASE_ERR + 4,
VXGE_HW_ERR_WRONG_IRQ = VXGE_HW_BASE_ERR + 5,
VXGE_HW_ERR_SWAPPER_CTRL = VXGE_HW_BASE_ERR + 6,
VXGE_HW_ERR_INVALID_MTU_SIZE = VXGE_HW_BASE_ERR + 7,
VXGE_HW_ERR_INVALID_INDEX = VXGE_HW_BASE_ERR + 8,
VXGE_HW_ERR_INVALID_TYPE = VXGE_HW_BASE_ERR + 9,
VXGE_HW_ERR_INVALID_OFFSET = VXGE_HW_BASE_ERR + 10,
VXGE_HW_ERR_INVALID_DEVICE = VXGE_HW_BASE_ERR + 11,
VXGE_HW_ERR_VERSION_CONFLICT = VXGE_HW_BASE_ERR + 12,
VXGE_HW_ERR_INVALID_PCI_INFO = VXGE_HW_BASE_ERR + 13,
VXGE_HW_ERR_INVALID_TCODE = VXGE_HW_BASE_ERR + 14,
VXGE_HW_ERR_INVALID_BLOCK_SIZE = VXGE_HW_BASE_ERR + 15,
VXGE_HW_ERR_INVALID_STATE = VXGE_HW_BASE_ERR + 16,
VXGE_HW_ERR_PRIVILEGED_OPERATION = VXGE_HW_BASE_ERR + 17,
VXGE_HW_ERR_INVALID_PORT = VXGE_HW_BASE_ERR + 18,
VXGE_HW_ERR_FIFO = VXGE_HW_BASE_ERR + 19,
VXGE_HW_ERR_VPATH = VXGE_HW_BASE_ERR + 20,
VXGE_HW_ERR_CRITICAL = VXGE_HW_BASE_ERR + 21,
VXGE_HW_ERR_SLOT_FREEZE = VXGE_HW_BASE_ERR + 22,
VXGE_HW_BADCFG_RING_INDICATE_MAX_PKTS = VXGE_HW_BASE_BADCFG + 1,
VXGE_HW_BADCFG_FIFO_BLOCKS = VXGE_HW_BASE_BADCFG + 2,
VXGE_HW_BADCFG_VPATH_MTU = VXGE_HW_BASE_BADCFG + 3,
VXGE_HW_BADCFG_VPATH_RPA_STRIP_VLAN_TAG = VXGE_HW_BASE_BADCFG + 4,
VXGE_HW_BADCFG_VPATH_MIN_BANDWIDTH = VXGE_HW_BASE_BADCFG + 5,
VXGE_HW_BADCFG_INTR_MODE = VXGE_HW_BASE_BADCFG + 6,
VXGE_HW_BADCFG_RTS_MAC_EN = VXGE_HW_BASE_BADCFG + 7,
VXGE_HW_EOF_TRACE_BUF = -1
};
/**
* enum vxge_hw_device_link_state - Link state enumeration.
* @VXGE_HW_LINK_NONE: Invalid link state.
* @VXGE_HW_LINK_DOWN: Link is down.
* @VXGE_HW_LINK_UP: Link is up.
*
*/
enum vxge_hw_device_link_state {
VXGE_HW_LINK_NONE,
VXGE_HW_LINK_DOWN,
VXGE_HW_LINK_UP
};
/**
* enum vxge_hw_fw_upgrade_code - FW upgrade return codes.
* @VXGE_HW_FW_UPGRADE_OK: All OK send next 16 bytes
* @VXGE_HW_FW_UPGRADE_DONE: upload completed
* @VXGE_HW_FW_UPGRADE_ERR: upload error
* @VXGE_FW_UPGRADE_BYTES2SKIP: skip bytes in the stream
*
*/
enum vxge_hw_fw_upgrade_code {
VXGE_HW_FW_UPGRADE_OK = 0,
VXGE_HW_FW_UPGRADE_DONE = 1,
VXGE_HW_FW_UPGRADE_ERR = 2,
VXGE_FW_UPGRADE_BYTES2SKIP = 3
};
/**
* enum vxge_hw_fw_upgrade_err_code - FW upgrade error codes.
* @VXGE_HW_FW_UPGRADE_ERR_CORRUPT_DATA_1: corrupt data
* @VXGE_HW_FW_UPGRADE_ERR_BUFFER_OVERFLOW: buffer overflow
* @VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_3: invalid .ncf file
* @VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_4: invalid .ncf file
* @VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_5: invalid .ncf file
* @VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_6: invalid .ncf file
* @VXGE_HW_FW_UPGRADE_ERR_CORRUPT_DATA_7: corrupt data
* @VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_8: invalid .ncf file
* @VXGE_HW_FW_UPGRADE_ERR_GENERIC_ERROR_UNKNOWN: generic error unknown type
* @VXGE_HW_FW_UPGRADE_ERR_FAILED_TO_FLASH: failed to flash image check failed
*/
enum vxge_hw_fw_upgrade_err_code {
VXGE_HW_FW_UPGRADE_ERR_CORRUPT_DATA_1 = 1,
VXGE_HW_FW_UPGRADE_ERR_BUFFER_OVERFLOW = 2,
VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_3 = 3,
VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_4 = 4,
VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_5 = 5,
VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_6 = 6,
VXGE_HW_FW_UPGRADE_ERR_CORRUPT_DATA_7 = 7,
VXGE_HW_FW_UPGRADE_ERR_INV_NCF_FILE_8 = 8,
VXGE_HW_FW_UPGRADE_ERR_GENERIC_ERROR_UNKNOWN = 9,
VXGE_HW_FW_UPGRADE_ERR_FAILED_TO_FLASH = 10
};
/**
* struct vxge_hw_device_date - Date Format
* @day: Day
* @month: Month
* @year: Year
* @date: Date in string format
*
* Structure for returning date
*/
#define VXGE_HW_FW_STRLEN 32
struct vxge_hw_device_date {
u32 day;
u32 month;
u32 year;
char date[VXGE_HW_FW_STRLEN];
};
struct vxge_hw_device_version {
u32 major;
u32 minor;
u32 build;
char version[VXGE_HW_FW_STRLEN];
};
/**
* struct vxge_hw_fifo_config - Configuration of fifo.
* @enable: Is this fifo to be commissioned
* @fifo_blocks: Numbers of TxDL (that is, lists of Tx descriptors)
* blocks per queue.
* @max_frags: Max number of Tx buffers per TxDL (that is, per single
* transmit operation).
* No more than 256 transmit buffers can be specified.
* @memblock_size: Fifo descriptors are allocated in blocks of @mem_block_size
* bytes. Setting @memblock_size to page size ensures
* by-page allocation of descriptors. 128K bytes is the
* maximum supported block size.
* @alignment_size: per Tx fragment DMA-able memory used to align transmit data
* (e.g., to align on a cache line).
* @intr: Boolean. Use 1 to generate interrupt for each completed TxDL.
* Use 0 otherwise.
* @no_snoop_bits: If non-zero, specifies no-snoop PCI operation,
* which generally improves latency of the host bridge operation
* (see PCI specification). For valid values please refer
* to struct vxge_hw_fifo_config{} in the driver sources.
* Configuration of all Titan fifos.
* Note: Valid (min, max) range for each attribute is specified in the body of
* the struct vxge_hw_fifo_config{} structure.
*/
struct vxge_hw_fifo_config {
u32 enable;
#define VXGE_HW_FIFO_ENABLE 1
#define VXGE_HW_FIFO_DISABLE 0
u32 fifo_blocks;
#define VXGE_HW_MIN_FIFO_BLOCKS 2
#define VXGE_HW_MAX_FIFO_BLOCKS 128
u32 max_frags;
#define VXGE_HW_MIN_FIFO_FRAGS 1
#define VXGE_HW_MAX_FIFO_FRAGS 256
u32 memblock_size;
#define VXGE_HW_MIN_FIFO_MEMBLOCK_SIZE VXGE_HW_BLOCK_SIZE
#define VXGE_HW_MAX_FIFO_MEMBLOCK_SIZE 131072
#define VXGE_HW_DEF_FIFO_MEMBLOCK_SIZE 8096
u32 alignment_size;
#define VXGE_HW_MIN_FIFO_ALIGNMENT_SIZE 0
#define VXGE_HW_MAX_FIFO_ALIGNMENT_SIZE 65536
#define VXGE_HW_DEF_FIFO_ALIGNMENT_SIZE VXGE_CACHE_LINE_SIZE
u32 intr;
#define VXGE_HW_FIFO_QUEUE_INTR_ENABLE 1
#define VXGE_HW_FIFO_QUEUE_INTR_DISABLE 0
#define VXGE_HW_FIFO_QUEUE_INTR_DEFAULT 0
u32 no_snoop_bits;
#define VXGE_HW_FIFO_NO_SNOOP_DISABLED 0
#define VXGE_HW_FIFO_NO_SNOOP_TXD 1
#define VXGE_HW_FIFO_NO_SNOOP_FRM 2
#define VXGE_HW_FIFO_NO_SNOOP_ALL 3
#define VXGE_HW_FIFO_NO_SNOOP_DEFAULT 0
};
/**
* struct vxge_hw_ring_config - Ring configurations.
* @enable: Is this ring to be commissioned
* @ring_blocks: Numbers of RxD blocks in the ring
* @buffer_mode: Receive buffer mode (1, 2, 3, or 5); for details please refer
* to Titan User Guide.
* @scatter_mode: Titan supports two receive scatter modes: A and B.
* For details please refer to Titan User Guide.
* @rx_timer_val: The number of 32ns periods that would be counted between two
* timer interrupts.
* @greedy_return: If Set it forces the device to return absolutely all RxD
* that are consumed and still on board when a timer interrupt
* triggers. If Clear, then if the device has already returned
* RxD before current timer interrupt triggered and after the
* previous timer interrupt triggered, then the device is not
* forced to return the rest of the consumed RxD that it has
* on board which account for a byte count less than the one
* programmed into PRC_CFG6.RXD_CRXDT field
* @rx_timer_ci: TBD
* @backoff_interval_us: Time (in microseconds), after which Titan
* tries to download RxDs posted by the host.
* Note that the "backoff" does not happen if host posts receive
* descriptors in the timely fashion.
* Ring configuration.
*/
struct vxge_hw_ring_config {
u32 enable;
#define VXGE_HW_RING_ENABLE 1
#define VXGE_HW_RING_DISABLE 0
#define VXGE_HW_RING_DEFAULT 1
u32 ring_blocks;
#define VXGE_HW_MIN_RING_BLOCKS 1
#define VXGE_HW_MAX_RING_BLOCKS 128
#define VXGE_HW_DEF_RING_BLOCKS 2
u32 buffer_mode;
#define VXGE_HW_RING_RXD_BUFFER_MODE_1 1
#define VXGE_HW_RING_RXD_BUFFER_MODE_3 3
#define VXGE_HW_RING_RXD_BUFFER_MODE_5 5
#define VXGE_HW_RING_RXD_BUFFER_MODE_DEFAULT 1
u32 scatter_mode;
#define VXGE_HW_RING_SCATTER_MODE_A 0
#define VXGE_HW_RING_SCATTER_MODE_B 1
#define VXGE_HW_RING_SCATTER_MODE_C 2
#define VXGE_HW_RING_SCATTER_MODE_USE_FLASH_DEFAULT 0xffffffff
u64 rxds_limit;
#define VXGE_HW_DEF_RING_RXDS_LIMIT 44
};
/**
* struct vxge_hw_vp_config - Configuration of virtual path
* @vp_id: Virtual Path Id
* @min_bandwidth: Minimum Guaranteed bandwidth
* @ring: See struct vxge_hw_ring_config{}.
* @fifo: See struct vxge_hw_fifo_config{}.
* @tti: Configuration of interrupt associated with Transmit.
* see struct vxge_hw_tim_intr_config();
* @rti: Configuration of interrupt associated with Receive.
* see struct vxge_hw_tim_intr_config();
* @mtu: mtu size used on this port.
* @rpa_strip_vlan_tag: Strip VLAN Tag enable/disable. Instructs the device to
* remove the VLAN tag from all received tagged frames that are not
* replicated at the internal L2 switch.
* 0 - Do not strip the VLAN tag.
* 1 - Strip the VLAN tag. Regardless of this setting, VLAN tags are
* always placed into the RxDMA descriptor.
*
* This structure is used by the driver to pass the configuration parameters to
* configure Virtual Path.
*/
struct vxge_hw_vp_config {
u32 vp_id;
#define VXGE_HW_VPATH_PRIORITY_MIN 0
#define VXGE_HW_VPATH_PRIORITY_MAX 16
#define VXGE_HW_VPATH_PRIORITY_DEFAULT 0
u32 min_bandwidth;
#define VXGE_HW_VPATH_BANDWIDTH_MIN 0
#define VXGE_HW_VPATH_BANDWIDTH_MAX 100
#define VXGE_HW_VPATH_BANDWIDTH_DEFAULT 0
struct vxge_hw_ring_config ring;
struct vxge_hw_fifo_config fifo;
struct vxge_hw_tim_intr_config tti;
struct vxge_hw_tim_intr_config rti;
u32 mtu;
#define VXGE_HW_VPATH_MIN_INITIAL_MTU VXGE_HW_MIN_MTU
#define VXGE_HW_VPATH_MAX_INITIAL_MTU VXGE_HW_MAX_MTU
#define VXGE_HW_VPATH_USE_FLASH_DEFAULT_INITIAL_MTU 0xffffffff
u32 rpa_strip_vlan_tag;
#define VXGE_HW_VPATH_RPA_STRIP_VLAN_TAG_ENABLE 1
#define VXGE_HW_VPATH_RPA_STRIP_VLAN_TAG_DISABLE 0
#define VXGE_HW_VPATH_RPA_STRIP_VLAN_TAG_USE_FLASH_DEFAULT 0xffffffff
};
/**
* struct vxge_hw_device_config - Device configuration.
* @dma_blockpool_initial: Initial size of DMA Pool
* @dma_blockpool_max: Maximum blocks in DMA pool
* @intr_mode: Line, or MSI-X interrupt.
*
* @rth_en: Enable Receive Traffic Hashing(RTH) using IT(Indirection Table).
* @rth_it_type: RTH IT table programming type
* @rts_mac_en: Enable Receive Traffic Steering using MAC destination address
* @vp_config: Configuration for virtual paths
* @device_poll_millis: Specify the interval (in milliseconds)
* to wait for register reads
*
* Titan configuration.
* Contains per-device configuration parameters, including:
* - stats sampling interval, etc.
*
* In addition, struct vxge_hw_device_config{} includes "subordinate"
* configurations, including:
* - fifos and rings;
* - MAC (done at firmware level).
*
* See Titan User Guide for more details.
* Note: Valid (min, max) range for each attribute is specified in the body of
* the struct vxge_hw_device_config{} structure. Please refer to the
* corresponding include file.
* See also: struct vxge_hw_tim_intr_config{}.
*/
struct vxge_hw_device_config {
u32 device_poll_millis;
#define VXGE_HW_MIN_DEVICE_POLL_MILLIS 1
#define VXGE_HW_MAX_DEVICE_POLL_MILLIS 100000
#define VXGE_HW_DEF_DEVICE_POLL_MILLIS 1000
u32 dma_blockpool_initial;
u32 dma_blockpool_max;
#define VXGE_HW_MIN_DMA_BLOCK_POOL_SIZE 0
#define VXGE_HW_INITIAL_DMA_BLOCK_POOL_SIZE 0
#define VXGE_HW_INCR_DMA_BLOCK_POOL_SIZE 4
#define VXGE_HW_MAX_DMA_BLOCK_POOL_SIZE 4096
#define VXGE_HW_MAX_PAYLOAD_SIZE_512 2
u32 intr_mode:2,
#define VXGE_HW_INTR_MODE_IRQLINE 0
#define VXGE_HW_INTR_MODE_MSIX 1
#define VXGE_HW_INTR_MODE_MSIX_ONE_SHOT 2
#define VXGE_HW_INTR_MODE_DEF 0
rth_en:1,
#define VXGE_HW_RTH_DISABLE 0
#define VXGE_HW_RTH_ENABLE 1
#define VXGE_HW_RTH_DEFAULT 0
rth_it_type:1,
#define VXGE_HW_RTH_IT_TYPE_SOLO_IT 0
#define VXGE_HW_RTH_IT_TYPE_MULTI_IT 1
#define VXGE_HW_RTH_IT_TYPE_DEFAULT 0
rts_mac_en:1,
#define VXGE_HW_RTS_MAC_DISABLE 0
#define VXGE_HW_RTS_MAC_ENABLE 1
#define VXGE_HW_RTS_MAC_DEFAULT 0
hwts_en:1;
#define VXGE_HW_HWTS_DISABLE 0
#define VXGE_HW_HWTS_ENABLE 1
#define VXGE_HW_HWTS_DEFAULT 1
struct vxge_hw_vp_config vp_config[VXGE_HW_MAX_VIRTUAL_PATHS];
};
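/*
 * Illustrative sketch only (not from the original sources): seeding this
 * configuration with MSI-X interrupts and RTH enabled, using the constants
 * defined above. The helper name is hypothetical.
 */
static inline void vxge_example_device_config(struct vxge_hw_device_config *cfg)
{
	cfg->device_poll_millis    = VXGE_HW_DEF_DEVICE_POLL_MILLIS;
	cfg->dma_blockpool_initial = VXGE_HW_INITIAL_DMA_BLOCK_POOL_SIZE;
	cfg->dma_blockpool_max     = VXGE_HW_MAX_DMA_BLOCK_POOL_SIZE;
	cfg->intr_mode             = VXGE_HW_INTR_MODE_MSIX;
	cfg->rth_en                = VXGE_HW_RTH_ENABLE;
	cfg->rth_it_type           = VXGE_HW_RTH_IT_TYPE_MULTI_IT;
	cfg->rts_mac_en            = VXGE_HW_RTS_MAC_DEFAULT;
	cfg->hwts_en               = VXGE_HW_HWTS_DEFAULT;
}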
/**
* function vxge_uld_link_up_f - Link-Up callback provided by driver.
* @devh: HW device handle.
* Link-up notification callback provided by the driver.
* This is one of the per-driver callbacks, see struct vxge_hw_uld_cbs{}.
*
* See also: struct vxge_hw_uld_cbs{}, vxge_uld_link_down_f{},
* vxge_hw_driver_initialize().
*/
/**
* function vxge_uld_link_down_f - Link-Down callback provided by
* driver.
* @devh: HW device handle.
*
* Link-Down notification callback provided by the driver.
* This is one of the per-driver callbacks, see struct vxge_hw_uld_cbs{}.
*
* See also: struct vxge_hw_uld_cbs{}, vxge_uld_link_up_f{},
* vxge_hw_driver_initialize().
*/
/**
* function vxge_uld_crit_err_f - Critical Error notification callback.
* @devh: HW device handle.
* (typically - at HW device initialization time).
* @type: Enumerated hw error, e.g.: double ECC.
* @serr_data: Titan status.
* @ext_data: Extended data. The contents depends on the @type.
*
* Critical error notification callback provided by the driver.
* This is one of the per-driver callbacks, see struct vxge_hw_uld_cbs{}.
*
* See also: struct vxge_hw_uld_cbs{}, enum vxge_hw_event{},
* vxge_hw_driver_initialize().
*/
/**
* struct vxge_hw_uld_cbs - driver "slow-path" callbacks.
* @link_up: See vxge_uld_link_up_f{}.
* @link_down: See vxge_uld_link_down_f{}.
* @crit_err: See vxge_uld_crit_err_f{}.
*
* Driver slow-path (per-driver) callbacks.
* Implemented by driver and provided to HW via
* vxge_hw_driver_initialize().
* Note that these callbacks are not mandatory: HW will not invoke
* a callback if NULL is specified.
*
* See also: vxge_hw_driver_initialize().
*/
struct vxge_hw_uld_cbs {
void (*link_up)(struct __vxge_hw_device *devh);
void (*link_down)(struct __vxge_hw_device *devh);
void (*crit_err)(struct __vxge_hw_device *devh,
enum vxge_hw_event type, u64 ext_data);
};
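/*
 * Sketch (illustrative, not from the original driver): how a driver might
 * supply its slow-path callbacks. Handler names are hypothetical; only the
 * struct vxge_hw_uld_cbs layout above is taken from this header. NULL
 * entries are simply skipped by the HW layer, as noted above.
 */
static void example_link_up(struct __vxge_hw_device *devh)
{
	/* e.g. mark the carrier up on the associated net_device */
}

static void example_link_down(struct __vxge_hw_device *devh)
{
	/* e.g. mark the carrier down and stop the TX queues */
}

static const struct vxge_hw_uld_cbs example_uld_cbs = {
	.link_up   = example_link_up,
	.link_down = example_link_down,
	.crit_err  = NULL,
};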
/*
* struct __vxge_hw_blockpool_entry - Block private data structure
* @item: List header used to link.
* @length: Length of the block
* @memblock: Virtual address block
* @dma_addr: DMA Address of the block.
* @dma_handle: DMA handle of the block.
* @acc_handle: DMA acc handle
*
* Block is allocated with a header to put the blocks into list.
*
*/
struct __vxge_hw_blockpool_entry {
struct list_head item;
u32 length;
void *memblock;
dma_addr_t dma_addr;
struct pci_dev *dma_handle;
struct pci_dev *acc_handle;
};
/*
* struct __vxge_hw_blockpool - Block Pool
* @hldev: HW device
* @block_size: size of each block.
* @pool_size: Number of blocks in the pool
* @pool_max: Maximum number of blocks above which to free additional blocks
* @req_out: Number of block requests outstanding with the OS
* @free_block_list: List of free blocks
*
* Block pool contains the DMA blocks preallocated.
*
*/
struct __vxge_hw_blockpool {
struct __vxge_hw_device *hldev;
u32 block_size;
u32 pool_size;
u32 pool_max;
u32 req_out;
struct list_head free_block_list;
struct list_head free_entry_list;
};
/*
* enum __vxge_hw_channel_type - Enumerated channel types.
* @VXGE_HW_CHANNEL_TYPE_UNKNOWN: Unknown channel.
* @VXGE_HW_CHANNEL_TYPE_FIFO: fifo.
* @VXGE_HW_CHANNEL_TYPE_RING: ring.
* @VXGE_HW_CHANNEL_TYPE_MAX: Maximum number of HW-supported
* (and recognized) channel types. Currently: 2.
*
* Enumerated channel types. Currently there are only two link-layer
* channels - Titan fifo and Titan ring. In the future the list will grow.
*/
enum __vxge_hw_channel_type {
VXGE_HW_CHANNEL_TYPE_UNKNOWN = 0,
VXGE_HW_CHANNEL_TYPE_FIFO = 1,
VXGE_HW_CHANNEL_TYPE_RING = 2,
VXGE_HW_CHANNEL_TYPE_MAX = 3
};
/*
* struct __vxge_hw_channel
* @item: List item; used to maintain a list of open channels.
* @type: Channel type. See enum vxge_hw_channel_type{}.
* @devh: Device handle. HW device object that contains _this_ channel.
* @vph: Virtual path handle. Virtual Path Object that contains _this_ channel.
* @length: Channel length. Currently allocated number of descriptors.
* The channel length "grows" when more descriptors get allocated.
* See _hw_mempool_grow.
* @reserve_arr: Reserve array. Contains descriptors that can be reserved
* by driver for the subsequent send or receive operation.
* See vxge_hw_fifo_txdl_reserve(),
* vxge_hw_ring_rxd_reserve().
* @reserve_ptr: Current pointer in the reserve array
* @reserve_top: Reserve top gives the maximum number of dtrs available in
* reserve array.
* @work_arr: Work array. Contains descriptors posted to the channel.
* Note that at any point in time @work_arr contains 3 types of
* descriptors:
* 1) posted but not yet consumed by Titan device;
* 2) consumed but not yet completed;
* 3) completed but not yet freed
* (via vxge_hw_fifo_txdl_free() or vxge_hw_ring_rxd_free())
* @post_index: Post index. At any point in time points on the
* position in the channel, which'll contain next to-be-posted
* descriptor.
* @compl_index: Completion index. At any point in time points on the
* position in the channel, which will contain next
* to-be-completed descriptor.
* @free_arr: Free array. Contains completed descriptors that were freed
* (i.e., handed over back to HW) by driver.
* See vxge_hw_fifo_txdl_free(), vxge_hw_ring_rxd_free().
* @free_ptr: current pointer in free array
* @per_dtr_space: Per-descriptor space (in bytes) that channel user can utilize
* to store per-operation control information.
* @stats: Pointer to common statistics
* @userdata: Per-channel opaque (void*) user-defined context, which may be
* driver object, ULP connection, etc.
* Once channel is open, @userdata is passed back to user via
* vxge_hw_channel_callback_f.
*
* HW channel object.
*
* See also: enum vxge_hw_channel_type{}, enum vxge_hw_channel_flag
*/
struct __vxge_hw_channel {
struct list_head item;
enum __vxge_hw_channel_type type;
struct __vxge_hw_device *devh;
struct __vxge_hw_vpath_handle *vph;
u32 length;
u32 vp_id;
void **reserve_arr;
u32 reserve_ptr;
u32 reserve_top;
void **work_arr;
u32 post_index ____cacheline_aligned;
u32 compl_index ____cacheline_aligned;
void **free_arr;
u32 free_ptr;
void **orig_arr;
u32 per_dtr_space;
void *userdata;
struct vxge_hw_common_reg __iomem *common_reg;
u32 first_vp_id;
struct vxge_hw_vpath_stats_sw_common_info *stats;
} ____cacheline_aligned;
/*
* struct __vxge_hw_virtualpath - Virtual Path
*
* @vp_id: Virtual path id
* @vp_open: This flag specifies if vxge_hw_vp_open is called from LL Driver
* @hldev: Hal device
* @vp_config: Virtual Path Config
* @vp_reg: VPATH Register map address in BAR0
* @vpmgmt_reg: VPATH_MGMT register map address
* @max_mtu: Max mtu that can be supported
* @vsport_number: vsport attached to this vpath
* @max_kdfc_db: Maximum kernel mode doorbells
* @max_nofl_db: Maximum non offload doorbells
* @tx_intr_num: Interrupt Number associated with the TX
* @ringh: Ring Queue
* @fifoh: FIFO Queue
* @vpath_handles: Virtual Path handles list
* @stats_block: Memory for DMAing stats
* @stats: Vpath statistics
*
* Virtual path structure to encapsulate the data related to a virtual path.
* Virtual paths are allocated by the HW upon getting configuration from the
* driver and inserted into the list of virtual paths.
*/
struct __vxge_hw_virtualpath {
u32 vp_id;
u32 vp_open;
#define VXGE_HW_VP_NOT_OPEN 0
#define VXGE_HW_VP_OPEN 1
struct __vxge_hw_device *hldev;
struct vxge_hw_vp_config *vp_config;
struct vxge_hw_vpath_reg __iomem *vp_reg;
struct vxge_hw_vpmgmt_reg __iomem *vpmgmt_reg;
struct __vxge_hw_non_offload_db_wrapper __iomem *nofl_db;
u32 max_mtu;
u32 vsport_number;
u32 max_kdfc_db;
u32 max_nofl_db;
u64 tim_tti_cfg1_saved;
u64 tim_tti_cfg3_saved;
u64 tim_rti_cfg1_saved;
u64 tim_rti_cfg3_saved;
struct __vxge_hw_ring *____cacheline_aligned ringh;
struct __vxge_hw_fifo *____cacheline_aligned fifoh;
struct list_head vpath_handles;
struct __vxge_hw_blockpool_entry *stats_block;
struct vxge_hw_vpath_stats_hw_info *hw_stats;
struct vxge_hw_vpath_stats_hw_info *hw_stats_sav;
struct vxge_hw_vpath_stats_sw_info *sw_stats;
spinlock_t lock;
};
/*
* struct __vxge_hw_vpath_handle - List item to store callback information
* @item: List head to keep the item in linked list
* @vpath: Virtual path to which this item belongs
*
* This structure is used to store the callback information.
*/
struct __vxge_hw_vpath_handle {
struct list_head item;
struct __vxge_hw_virtualpath *vpath;
};
/*
* struct __vxge_hw_device
*
* HW device object.
*/
/**
* struct __vxge_hw_device - Hal device object
* @magic: Magic Number
* @bar0: BAR0 virtual address.
* @pdev: Physical device handle
* @config: Configuration passed by the LL driver at initialization
* @link_state: Link state
*
* HW device object. Represents Titan adapter
*/
struct __vxge_hw_device {
u32 magic;
#define VXGE_HW_DEVICE_MAGIC 0x12345678
#define VXGE_HW_DEVICE_DEAD 0xDEADDEAD
void __iomem *bar0;
struct pci_dev *pdev;
struct net_device *ndev;
struct vxge_hw_device_config config;
enum vxge_hw_device_link_state link_state;
const struct vxge_hw_uld_cbs *uld_callbacks;
u32 host_type;
u32 func_id;
u32 access_rights;
#define VXGE_HW_DEVICE_ACCESS_RIGHT_VPATH 0x1
#define VXGE_HW_DEVICE_ACCESS_RIGHT_SRPCIM 0x2
#define VXGE_HW_DEVICE_ACCESS_RIGHT_MRPCIM 0x4
struct vxge_hw_legacy_reg __iomem *legacy_reg;
struct vxge_hw_toc_reg __iomem *toc_reg;
struct vxge_hw_common_reg __iomem *common_reg;
struct vxge_hw_mrpcim_reg __iomem *mrpcim_reg;
struct vxge_hw_srpcim_reg __iomem *srpcim_reg \
[VXGE_HW_TITAN_SRPCIM_REG_SPACES];
struct vxge_hw_vpmgmt_reg __iomem *vpmgmt_reg \
[VXGE_HW_TITAN_VPMGMT_REG_SPACES];
struct vxge_hw_vpath_reg __iomem *vpath_reg \
[VXGE_HW_TITAN_VPATH_REG_SPACES];
u8 __iomem *kdfc;
u8 __iomem *usdc;
struct __vxge_hw_virtualpath virtual_paths \
[VXGE_HW_MAX_VIRTUAL_PATHS];
u64 vpath_assignments;
u64 vpaths_deployed;
u32 first_vp_id;
u64 tim_int_mask0[4];
u32 tim_int_mask1[4];
struct __vxge_hw_blockpool block_pool;
struct vxge_hw_device_stats stats;
u32 debug_module_mask;
u32 debug_level;
u32 level_err;
u32 level_trace;
u16 eprom_versions[VXGE_HW_MAX_ROM_IMAGES];
};
#define VXGE_HW_INFO_LEN 64
/**
* struct vxge_hw_device_hw_info - Device information
* @host_type: Host Type
* @func_id: Function Id
* @vpath_mask: vpath bit mask
* @fw_version: Firmware version
* @fw_date: Firmware Date
* @flash_version: Firmware version
* @flash_date: Firmware Date
* @mac_addrs: Mac addresses for each vpath
* @mac_addr_masks: Mac address masks for each vpath
*
* Returns the vpath mask that has the bits set for each vpath allocated
* for the driver and the first mac address for each vpath
*/
struct vxge_hw_device_hw_info {
u32 host_type;
#define VXGE_HW_NO_MR_NO_SR_NORMAL_FUNCTION 0
#define VXGE_HW_MR_NO_SR_VH0_BASE_FUNCTION 1
#define VXGE_HW_NO_MR_SR_VH0_FUNCTION0 2
#define VXGE_HW_NO_MR_SR_VH0_VIRTUAL_FUNCTION 3
#define VXGE_HW_MR_SR_VH0_INVALID_CONFIG 4
#define VXGE_HW_SR_VH_FUNCTION0 5
#define VXGE_HW_SR_VH_VIRTUAL_FUNCTION 6
#define VXGE_HW_VH_NORMAL_FUNCTION 7
u64 function_mode;
#define VXGE_HW_FUNCTION_MODE_SINGLE_FUNCTION 0
#define VXGE_HW_FUNCTION_MODE_MULTI_FUNCTION 1
#define VXGE_HW_FUNCTION_MODE_SRIOV 2
#define VXGE_HW_FUNCTION_MODE_MRIOV 3
#define VXGE_HW_FUNCTION_MODE_MRIOV_8 4
#define VXGE_HW_FUNCTION_MODE_MULTI_FUNCTION_17 5
#define VXGE_HW_FUNCTION_MODE_SRIOV_8 6
#define VXGE_HW_FUNCTION_MODE_SRIOV_4 7
#define VXGE_HW_FUNCTION_MODE_MULTI_FUNCTION_2 8
#define VXGE_HW_FUNCTION_MODE_MULTI_FUNCTION_4 9
#define VXGE_HW_FUNCTION_MODE_MRIOV_4 10
u32 func_id;
u64 vpath_mask;
struct vxge_hw_device_version fw_version;
struct vxge_hw_device_date fw_date;
struct vxge_hw_device_version flash_version;
struct vxge_hw_device_date flash_date;
u8 serial_number[VXGE_HW_INFO_LEN];
u8 part_number[VXGE_HW_INFO_LEN];
u8 product_desc[VXGE_HW_INFO_LEN];
u8 mac_addrs[VXGE_HW_MAX_VIRTUAL_PATHS][ETH_ALEN];
u8 mac_addr_masks[VXGE_HW_MAX_VIRTUAL_PATHS][ETH_ALEN];
};
/**
* struct vxge_hw_device_attr - Device memory spaces.
* @bar0: BAR0 virtual address.
* @pdev: PCI device object.
*
* Device memory spaces. Includes configuration, BAR0 etc. per device
* mapped memories. Also, includes a pointer to OS-specific PCI device object.
*/
struct vxge_hw_device_attr {
void __iomem *bar0;
struct pci_dev *pdev;
const struct vxge_hw_uld_cbs *uld_callbacks;
};
#define VXGE_HW_DEVICE_LINK_STATE_SET(hldev, ls) (hldev->link_state = ls)
#define VXGE_HW_DEVICE_TIM_INT_MASK_SET(m0, m1, i) { \
if (i < 16) { \
m0[0] |= vxge_vBIT(0x8, (i*4), 4); \
m0[1] |= vxge_vBIT(0x4, (i*4), 4); \
} \
else { \
m1[0] = 0x80000000; \
m1[1] = 0x40000000; \
} \
}
#define VXGE_HW_DEVICE_TIM_INT_MASK_RESET(m0, m1, i) { \
if (i < 16) { \
m0[0] &= ~vxge_vBIT(0x8, (i*4), 4); \
m0[1] &= ~vxge_vBIT(0x4, (i*4), 4); \
} \
else { \
m1[0] = 0; \
m1[1] = 0; \
} \
}
#define VXGE_HW_DEVICE_STATS_PIO_READ(loc, offset) { \
status = vxge_hw_mrpcim_stats_access(hldev, \
VXGE_HW_STATS_OP_READ, \
loc, \
offset, \
&val64); \
if (status != VXGE_HW_OK) \
return status; \
}
/*
* struct __vxge_hw_ring - Ring channel.
* @channel: Channel "base" of this ring, the common part of all HW
* channels.
* @mempool: Memory pool, the pool from which descriptors get allocated.
* (See vxge_hw_mm.h).
* @config: Ring configuration, part of device configuration
* (see struct vxge_hw_device_config{}).
* @ring_length: Length of the ring
* @buffer_mode: 1, 3, or 5. The value specifies a receive buffer mode,
* as per Titan User Guide.
* @rxd_size: RxD sizes for 1-, 3- or 5- buffer modes. As per Titan spec,
* 1-buffer mode descriptor is 32 byte long, etc.
* @rxd_priv_size: Per RxD size reserved (by HW) for driver to keep
* per-descriptor data (e.g., DMA handle for Solaris)
* @per_rxd_space: Per rxd space requested by driver
* @rxds_per_block: Number of descriptors per hardware-defined RxD
* block. Depends on the (1-, 3-, 5-) buffer mode.
* @rxdblock_priv_size: Reserved at the end of each RxD block. HW internal
* usage. Not to confuse with @rxd_priv_size.
* @cmpl_cnt: Completion counter. Is reset to zero upon entering the ISR.
* @callback: Channel completion callback. HW invokes the callback when there
* are new completions on that channel. In many implementations
* the @callback executes in the hw interrupt context.
* @rxd_init: Channel's descriptor-initialize callback.
* See vxge_hw_ring_rxd_init_f{}.
* If not NULL, HW invokes the callback when opening
* the ring.
* @rxd_term: Channel's descriptor-terminate callback. If not NULL,
* HW invokes the callback when closing the corresponding channel.
* See also vxge_hw_channel_rxd_term_f{}.
* @stats: Statistics for ring
* Ring channel.
*
* Note: The structure is cache line aligned to better utilize
* CPU cache performance.
*/
struct __vxge_hw_ring {
struct __vxge_hw_channel channel;
struct vxge_hw_mempool *mempool;
struct vxge_hw_vpath_reg __iomem *vp_reg;
struct vxge_hw_common_reg __iomem *common_reg;
u32 ring_length;
u32 buffer_mode;
u32 rxd_size;
u32 rxd_priv_size;
u32 per_rxd_space;
u32 rxds_per_block;
u32 rxdblock_priv_size;
u32 cmpl_cnt;
u32 vp_id;
u32 doorbell_cnt;
u32 total_db_cnt;
u64 rxds_limit;
u32 rtimer;
u64 tim_rti_cfg1_saved;
u64 tim_rti_cfg3_saved;
enum vxge_hw_status (*callback)(
struct __vxge_hw_ring *ringh,
void *rxdh,
u8 t_code,
void *userdata);
enum vxge_hw_status (*rxd_init)(
void *rxdh,
void *userdata);
void (*rxd_term)(
void *rxdh,
enum vxge_hw_rxd_state state,
void *userdata);
struct vxge_hw_vpath_stats_sw_ring_info *stats ____cacheline_aligned;
struct vxge_hw_ring_config *config;
} ____cacheline_aligned;
/**
* enum vxge_hw_txdl_state - Descriptor (TXDL) state.
* @VXGE_HW_TXDL_STATE_NONE: Invalid state.
* @VXGE_HW_TXDL_STATE_AVAIL: Descriptor is available for reservation.
* @VXGE_HW_TXDL_STATE_POSTED: Descriptor is posted for processing by the
* device.
* @VXGE_HW_TXDL_STATE_FREED: Descriptor is free and can be reused for
* filling-in and posting later.
*
* Titan/HW descriptor states.
*
*/
enum vxge_hw_txdl_state {
VXGE_HW_TXDL_STATE_NONE = 0,
VXGE_HW_TXDL_STATE_AVAIL = 1,
VXGE_HW_TXDL_STATE_POSTED = 2,
VXGE_HW_TXDL_STATE_FREED = 3
};
/*
* struct __vxge_hw_fifo - Fifo.
* @channel: Channel "base" of this fifo, the common part of all HW
* channels.
* @mempool: Memory pool, from which descriptors get allocated.
* @config: Fifo configuration, part of device configuration
* (see struct vxge_hw_device_config{}).
* @interrupt_type: Interrupt type to be used
* @no_snoop_bits: See struct vxge_hw_fifo_config{}.
* @txdl_per_memblock: Number of TxDLs (TxD lists) per memblock.
* For more details on TxDLs please refer to the Titan UG.
* @txdl_size: Configured TxDL size (i.e., number of TxDs in a list), plus
* per-TxDL HW private space (struct __vxge_hw_fifo_txdl_priv).
* @priv_size: Per-Tx descriptor space reserved for driver
* usage.
* @per_txdl_space: Per txdl private space for the driver
* @callback: Fifo completion callback. HW invokes the callback when there
* are new completions on that fifo. In many implementations
* the @callback executes in the hw interrupt context.
* @txdl_term: Fifo's descriptor-terminate callback. If not NULL,
* HW invokes the callback when closing the corresponding fifo.
* See also vxge_hw_fifo_txdl_term_f{}.
* @stats: Statistics of this fifo
*
* Fifo channel.
* Note: The structure is cache line aligned.
*/
struct __vxge_hw_fifo {
struct __vxge_hw_channel channel;
struct vxge_hw_mempool *mempool;
struct vxge_hw_fifo_config *config;
struct vxge_hw_vpath_reg __iomem *vp_reg;
struct __vxge_hw_non_offload_db_wrapper __iomem *nofl_db;
u64 interrupt_type;
u32 no_snoop_bits;
u32 txdl_per_memblock;
u32 txdl_size;
u32 priv_size;
u32 per_txdl_space;
u32 vp_id;
u32 tx_intr_num;
u32 rtimer;
u64 tim_tti_cfg1_saved;
u64 tim_tti_cfg3_saved;
enum vxge_hw_status (*callback)(
struct __vxge_hw_fifo *fifo_handle,
void *txdlh,
enum vxge_hw_fifo_tcode t_code,
void *userdata,
struct sk_buff ***skb_ptr,
int nr_skb,
int *more);
void (*txdl_term)(
void *txdlh,
enum vxge_hw_txdl_state state,
void *userdata);
struct vxge_hw_vpath_stats_sw_fifo_info *stats ____cacheline_aligned;
} ____cacheline_aligned;
/*
* struct __vxge_hw_fifo_txdl_priv - Transmit descriptor HW-private data.
* @dma_addr: DMA (mapped) address of _this_ descriptor.
* @dma_handle: DMA handle used to map the descriptor onto device.
* @dma_offset: Descriptor's offset in the memory block. HW allocates
* descriptors in memory blocks (see struct vxge_hw_fifo_config{})
* Each memblock is a contiguous block of DMA-able memory.
* @frags: Total number of fragments (that is, contiguous data buffers)
* carried by this TxDL.
* @align_vaddr_start: Aligned virtual address start
* @align_vaddr: Virtual address of the per-TxDL area in memory used for
* alignment. Used to place one or more mis-aligned fragments
* @align_dma_addr: DMA address translated from the @align_vaddr.
* @align_dma_handle: DMA handle that corresponds to @align_dma_addr.
* @align_dma_acch: DMA access handle corresponds to @align_dma_addr.
* @align_dma_offset: The current offset into the @align_vaddr area.
* Grows while filling the descriptor, gets reset.
* @align_used_frags: Number of fragments used.
* @alloc_frags: Total number of fragments allocated.
* @unused: TODO
* @next_txdl_priv: (TODO).
* @first_txdp: (TODO).
* @linked_txdl_priv: Pointer to any linked TxDL for creating contiguous
* TxDL list.
* @txdlh: Corresponding txdlh to this TxDL.
* @memblock: Pointer to the TxDL memory block or memory page.
* on the next send operation.
* @dma_object: DMA address and handle of the memory block that contains
* the descriptor. This member is used only in the "checked"
* version of the HW (to enforce certain assertions);
* otherwise it gets compiled out.
* @allocated: True if the descriptor is reserved, 0 otherwise. Internal usage.
*
* Per-transmit descriptor HW-private data. HW uses the space to keep DMA
* information associated with the descriptor. Note that driver can ask HW
* to allocate additional per-descriptor space for its own (driver-specific)
* purposes.
*
* See also: struct vxge_hw_ring_rxd_priv{}.
*/
struct __vxge_hw_fifo_txdl_priv {
dma_addr_t dma_addr;
struct pci_dev *dma_handle;
ptrdiff_t dma_offset;
u32 frags;
u8 *align_vaddr_start;
u8 *align_vaddr;
dma_addr_t align_dma_addr;
struct pci_dev *align_dma_handle;
struct pci_dev *align_dma_acch;
ptrdiff_t align_dma_offset;
u32 align_used_frags;
u32 alloc_frags;
u32 unused;
struct __vxge_hw_fifo_txdl_priv *next_txdl_priv;
struct vxge_hw_fifo_txd *first_txdp;
void *memblock;
};
/*
* struct __vxge_hw_non_offload_db_wrapper - Non-offload Doorbell Wrapper
* @control_0: Bits 0 to 7 - Doorbell type.
* Bits 8 to 31 - Reserved.
* Bits 32 to 39 - The highest TxD in this TxDL.
* Bits 40 to 47 - Reserved.
* Bits 48 to 55 - Reserved.
* Bits 56 to 63 - No snoop flags.
* @txdl_ptr: The starting location of the TxDL in host memory.
*
* Created by the host and written to the adapter via PIO to a Kernel Doorbell
* FIFO. All non-offload doorbell wrapper fields must be written by the host as
* part of a doorbell write. Consumed by the adapter but is not written by the
* adapter.
*/
struct __vxge_hw_non_offload_db_wrapper {
u64 control_0;
#define VXGE_HW_NODBW_GET_TYPE(ctrl0) vxge_bVALn(ctrl0, 0, 8)
#define VXGE_HW_NODBW_TYPE(val) vxge_vBIT(val, 0, 8)
#define VXGE_HW_NODBW_TYPE_NODBW 0
#define VXGE_HW_NODBW_GET_LAST_TXD_NUMBER(ctrl0) vxge_bVALn(ctrl0, 32, 8)
#define VXGE_HW_NODBW_LAST_TXD_NUMBER(val) vxge_vBIT(val, 32, 8)
#define VXGE_HW_NODBW_GET_NO_SNOOP(ctrl0) vxge_bVALn(ctrl0, 56, 8)
#define VXGE_HW_NODBW_LIST_NO_SNOOP(val) vxge_vBIT(val, 56, 8)
#define VXGE_HW_NODBW_LIST_NO_SNOOP_TXD_READ_TXD0_WRITE 0x2
#define VXGE_HW_NODBW_LIST_NO_SNOOP_TX_FRAME_DATA_READ 0x1
u64 txdl_ptr;
};
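/*
 * Sketch (illustrative): composing the doorbell control word from the field
 * macros above. The resulting value, together with the TxDL address in
 * @txdl_ptr, is what gets PIO-written to the kernel doorbell FIFO.
 */
static inline u64 example_nodbw_control0(u32 last_txd)
{
	return VXGE_HW_NODBW_TYPE(VXGE_HW_NODBW_TYPE_NODBW) |
	       VXGE_HW_NODBW_LAST_TXD_NUMBER(last_txd) |
	       VXGE_HW_NODBW_LIST_NO_SNOOP(
			VXGE_HW_NODBW_LIST_NO_SNOOP_TXD_READ_TXD0_WRITE |
			VXGE_HW_NODBW_LIST_NO_SNOOP_TX_FRAME_DATA_READ);
}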
/*
* TX Descriptor
*/
/**
* struct vxge_hw_fifo_txd - Transmit Descriptor
* @control_0: Bits 0 to 6 - Reserved.
* Bit 7 - List Ownership. This field should be initialized
* to '1' by the driver before the transmit list pointer is
* written to the adapter. This field will be set to '0' by the
* adapter once it has completed transmitting the frame or frames in
* the list. Note - This field is only valid in TxD0. Additionally,
* for multi-list sequences, the driver should not release any
* buffers until the ownership of the last list in the multi-list
* sequence has been returned to the host.
* Bits 8 to 11 - Reserved
* Bits 12 to 15 - Transfer_Code. This field is only valid in
* TxD0. It is used to describe the status of the transmit data
* buffer transfer. This field is always overwritten by the
* adapter, so this field may be initialized to any value.
* Bits 16 to 17 - Host steering. This field allows the host to
* override the selection of the physical transmit port.
* Attention:
* Normal sounds as if learned from the switch rather than from
* the aggregation algorithms.
* 00: Normal. Use Destination/MAC Address
* lookup to determine the transmit port.
* 01: Send on physical Port1.
* 10: Send on physical Port0.
* 11: Send on both ports.
* Bits 18 to 21 - Reserved
* Bits 22 to 23 - Gather_Code. This field is set by the host and
* is used to describe how individual buffers comprise a frame.
* 10: First descriptor of a frame.
* 00: Middle of a multi-descriptor frame.
* 01: Last descriptor of a frame.
* 11: First and last descriptor of a frame (the entire frame
* resides in a single buffer).
* For multi-descriptor frames, the only valid gather code sequence
* is {10, [00], 01}. In other words, the descriptors must be placed
* in the list in the correct order.
* Bits 24 to 27 - Reserved
* Bits 28 to 29 - LSO_Frm_Encap. LSO Frame Encapsulation
* definition. Only valid in TxD0. This field allows the host to
* indicate the Ethernet encapsulation of an outbound LSO packet.
* 00 - classic mode (best guess)
* 01 - LLC
* 10 - SNAP
* 11 - DIX
* If "classic mode" is selected, the adapter will attempt to
* decode the frame's Ethernet encapsulation by examining the L/T
* field as follows:
* <= 0x05DC LLC/SNAP encoding; must examine DSAP/SSAP to determine
* if packet is IPv4 or IPv6.
* 0x8870 Jumbo-SNAP encoding.
* 0x0800 IPv4 DIX encoding
* 0x86DD IPv6 DIX encoding
* others illegal encapsulation
* Bits 30 - LSO_ Flag. Large Send Offload (LSO) flag.
* Set to 1 to perform segmentation offload for TCP/UDP.
* This field is valid only in TxD0.
* Bits 31 to 33 - Reserved.
* Bits 34 to 47 - LSO_MSS. TCP/UDP LSO Maximum Segment Size
* This field is meaningful only when LSO_Control is non-zero.
* When LSO_Control is set to TCP_LSO, the single (possibly large)
* TCP segment described by this TxDL will be sent as a series of
* TCP segments each of which contains no more than LSO_MSS
* payload bytes.
* When LSO_Control is set to UDP_LSO, the single (possibly large)
* UDP datagram described by this TxDL will be sent as a series of
* UDP datagrams each of which contains no more than LSO_MSS
* payload bytes.
* All outgoing frames from this TxDL will have LSO_MSS bytes of UDP
* or TCP payload, with the exception of the last, which will have
* <= LSO_MSS bytes of payload.
* Bits 48 to 63 - Buffer_Size. Number of valid bytes in the
* buffer to be read by the adapter. This field is written by the
* host. A value of 0 is illegal.
* Bits 32 to 63 - This value is written by the adapter upon
* completion of a UDP or TCP LSO operation and indicates the number
* of UDP or TCP payload bytes that were transmitted. 0x0000 will be
* returned for any non-LSO operation.
* @control_1: Bits 0 to 4 - Reserved.
* Bit 5 - Tx_CKO_IPv4 Set to a '1' to enable IPv4 header checksum
* offload. This field is only valid in the first TxD of a frame.
* Bit 6 - Tx_CKO_TCP Set to a '1' to enable TCP checksum offload.
* This field is only valid in the first TxD of a frame (the TxD's
* gather code must be 10 or 11). The driver should only set this
* bit if it can guarantee that TCP is present.
* Bit 7 - Tx_CKO_UDP Set to a '1' to enable UDP checksum offload.
* This field is only valid in the first TxD of a frame (the TxD's
* gather code must be 10 or 11). The driver should only set this
* bit if it can guarantee that UDP is present.
* Bits 8 to 14 - Reserved.
* Bit 15 - Tx_VLAN_Enable VLAN tag insertion flag. Set to a '1' to
* instruct the adapter to insert the VLAN tag specified by the
* Tx_VLAN_Tag field. This field is only valid in the first TxD of
* a frame.
* Bits 16 to 31 - Tx_VLAN_Tag. Variable portion of the VLAN tag
* to be inserted into the frame by the adapter (the first two bytes
* of a VLAN tag are always 0x8100). This field is only valid if the
* Tx_VLAN_Enable field is set to '1'.
* Bits 32 to 33 - Reserved.
* Bits 34 to 39 - Tx_Int_Number. Indicates which Tx interrupt
* number the frame associated with. This field is written by the
* host. It is only valid in the first TxD of a frame.
* Bits 40 to 42 - Reserved.
* Bit 43 - Set to 1 to exclude the frame from bandwidth metering
* functions. This field is valid only in the first TxD
* of a frame.
* Bits 44 to 45 - Reserved.
* Bit 46 - Tx_Int_Per_List Set to a '1' to instruct the adapter to
* generate an interrupt as soon as all of the frames in the list
* have been transmitted. In order to have per-frame interrupts,
* the driver should place a maximum of one frame per list. This
* field is only valid in the first TxD of a frame.
* Bit 47 - Tx_Int_Utilization Set to a '1' to instruct the adapter
* to count the frame toward the utilization interrupt specified in
* the Tx_Int_Number field. This field is only valid in the first
* TxD of a frame.
* Bits 48 to 63 - Reserved.
* @buffer_pointer: Buffer start address.
* @host_control: Host_Control.Opaque 64bit data stored by driver inside the
* Titan descriptor prior to posting the latter on the fifo
* via vxge_hw_fifo_txdl_post().The %host_control is returned as is
* to the driver with each completed descriptor.
*
* Transmit descriptor (TxD). A fifo descriptor contains a configured number
* (list) of TxDs. For more details please refer to Titan User Guide,
* Section 5.4.2 "Transmit Descriptor (TxD) Format".
*/
struct vxge_hw_fifo_txd {
u64 control_0;
#define VXGE_HW_FIFO_TXD_LIST_OWN_ADAPTER vxge_mBIT(7)
#define VXGE_HW_FIFO_TXD_T_CODE_GET(ctrl0) vxge_bVALn(ctrl0, 12, 4)
#define VXGE_HW_FIFO_TXD_T_CODE(val) vxge_vBIT(val, 12, 4)
#define VXGE_HW_FIFO_TXD_T_CODE_UNUSED VXGE_HW_FIFO_T_CODE_UNUSED
#define VXGE_HW_FIFO_TXD_GATHER_CODE(val) vxge_vBIT(val, 22, 2)
#define VXGE_HW_FIFO_TXD_GATHER_CODE_FIRST VXGE_HW_FIFO_GATHER_CODE_FIRST
#define VXGE_HW_FIFO_TXD_GATHER_CODE_LAST VXGE_HW_FIFO_GATHER_CODE_LAST
#define VXGE_HW_FIFO_TXD_LSO_EN vxge_mBIT(30)
#define VXGE_HW_FIFO_TXD_LSO_MSS(val) vxge_vBIT(val, 34, 14)
#define VXGE_HW_FIFO_TXD_BUFFER_SIZE(val) vxge_vBIT(val, 48, 16)
u64 control_1;
#define VXGE_HW_FIFO_TXD_TX_CKO_IPV4_EN vxge_mBIT(5)
#define VXGE_HW_FIFO_TXD_TX_CKO_TCP_EN vxge_mBIT(6)
#define VXGE_HW_FIFO_TXD_TX_CKO_UDP_EN vxge_mBIT(7)
#define VXGE_HW_FIFO_TXD_VLAN_ENABLE vxge_mBIT(15)
#define VXGE_HW_FIFO_TXD_VLAN_TAG(val) vxge_vBIT(val, 16, 16)
#define VXGE_HW_FIFO_TXD_INT_NUMBER(val) vxge_vBIT(val, 34, 6)
#define VXGE_HW_FIFO_TXD_INT_TYPE_PER_LIST vxge_mBIT(46)
#define VXGE_HW_FIFO_TXD_INT_TYPE_UTILZ vxge_mBIT(47)
u64 buffer_pointer;
u64 host_control;
};
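/*
 * Sketch (illustrative, not from the driver): filling one TxD for a
 * single-buffer frame with TCP checksum offload, using only the field
 * macros defined above. Gather-code setup and posting via the fifo API
 * are omitted here.
 */
static inline void example_fill_txd(struct vxge_hw_fifo_txd *txdp,
				    dma_addr_t dma, u32 len)
{
	txdp->buffer_pointer = dma;
	txdp->control_0 = VXGE_HW_FIFO_TXD_BUFFER_SIZE(len) |
			  VXGE_HW_FIFO_TXD_LIST_OWN_ADAPTER;
	txdp->control_1 = VXGE_HW_FIFO_TXD_TX_CKO_TCP_EN |
			  VXGE_HW_FIFO_TXD_INT_TYPE_PER_LIST;
}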
/**
* struct vxge_hw_ring_rxd_1 - One buffer mode RxD for ring
* @host_control: This field is exclusively for host use and is "readonly"
* from the adapter's perspective.
* @control_0:Bits 0 to 6 - RTH_Bucket get
* Bit 7 - Own Descriptor ownership bit. This bit is set to 1
* by the host, and is set to 0 by the adapter.
* 0 - Host owns RxD and buffer.
* 1 - The adapter owns RxD and buffer.
* Bit 8 - Fast_Path_Eligible When set, indicates that the
* received frame meets all of the criteria for fast path processing.
* The required criteria are as follows:
* !SYN &
* (Transfer_Code == "Transfer OK") &
* (!Is_IP_Fragment) &
* ((Is_IPv4 & computed_L3_checksum == 0xFFFF) |
* (Is_IPv6)) &
* ((Is_TCP & computed_L4_checksum == 0xFFFF) |
* (Is_UDP & (computed_L4_checksum == 0xFFFF |
* computed _L4_checksum == 0x0000)))
* (same meaning for all RxD buffer modes)
* Bit 9 - L3 Checksum Correct
* Bit 10 - L4 Checksum Correct
* Bit 11 - Reserved
* Bit 12 to 15 - This field is written by the adapter. It is
* used to report the status of the frame transfer to the host.
* 0x0 - Transfer OK
* 0x4 - RDA Failure During Transfer
* 0x5 - Unparseable Packet, such as unknown IPv6 header.
* 0x6 - Frame integrity error (FCS or ECC).
* 0x7 - Buffer Size Error. The provided buffer(s) were not
* appropriately sized and data loss occurred.
* 0x8 - Internal ECC Error. RxD corrupted.
* 0x9 - IPv4 Checksum error
* 0xA - TCP/UDP Checksum error
* 0xF - Unknown Error or Multiple Error. Indicates an
* unknown problem or that more than one of transfer codes is set.
* Bit 16 - SYN The adapter sets this field to indicate that
* the incoming frame contained a TCP segment with its SYN bit
* set and its ACK bit NOT set. (same meaning for all RxD buffer
* modes)
* Bit 17 - Is ICMP
* Bit 18 - RTH_SPDM_HIT Set to 1 if there was a match in the
* Socket Pair Direct Match Table and the frame was steered based
* on SPDM.
* Bit 19 - RTH_IT_HIT Set to 1 if there was a match in the
* Indirection Table and the frame was steered based on hash
* indirection.
* Bit 20 to 23 - RTH_HASH_TYPE Indicates the function (hash
* type) that was used to calculate the hash.
* Bit 24 - IS_VLAN Set to '1' if the frame was/is VLAN
* tagged.
* Bit 25 to 26 - ETHER_ENCAP Reflects the Ethernet encapsulation
* of the received frame.
* 0x0 - Ethernet DIX
* 0x1 - LLC
* 0x2 - SNAP (includes Jumbo-SNAP)
* 0x3 - IPX
* Bit 27 - IS_IPV4 Set to '1' if the frame contains an IPv4 packet.
* Bit 28 - IS_IPV6 Set to '1' if the frame contains an IPv6 packet.
* Bit 29 - IS_IP_FRAG Set to '1' if the frame contains a fragmented
* IP packet.
* Bit 30 - IS_TCP Set to '1' if the frame contains a TCP segment.
* Bit 31 - IS_UDP Set to '1' if the frame contains a UDP message.
* Bit 32 to 47 - L3_Checksum[0:15] The IPv4 checksum value that
* arrived with the frame. If the resulting computed IPv4 header
* checksum for the frame did not produce the expected 0xFFFF value,
* then the transfer code would be set to 0x9.
* Bit 48 to 63 - L4_Checksum[0:15] The TCP/UDP checksum value that
* arrived with the frame. If the resulting computed TCP/UDP checksum
* for the frame did not produce the expected 0xFFFF value, then the
* transfer code would be set to 0xA.
* @control_1:Bits 0 to 1 - Reserved
* Bits 2 to 15 - Buffer0_Size.This field is set by the host and
* eventually overwritten by the adapter. The host writes the
* available buffer size in bytes when it passes the descriptor to
* the adapter. When a frame is delivered the host, the adapter
* populates this field with the number of bytes written into the
* buffer. The largest supported buffer is 16,383 bytes.
* Bit 16 to 47 - RTH Hash Value 32-bit RTH hash value. Only valid if
* RTH_HASH_TYPE (Control_0, bits 20:23) is nonzero.
* Bit 48 to 63 - VLAN_Tag[0:15] The contents of the variable portion
* of the VLAN tag, if one was detected by the adapter. This field is
* populated even if VLAN-tag stripping is enabled.
* @buffer0_ptr: Pointer to buffer. This field is populated by the driver.
*
* One buffer mode RxD for ring structure
*/
struct vxge_hw_ring_rxd_1 {
u64 host_control;
u64 control_0;
#define VXGE_HW_RING_RXD_RTH_BUCKET_GET(ctrl0) vxge_bVALn(ctrl0, 0, 7)
#define VXGE_HW_RING_RXD_LIST_OWN_ADAPTER vxge_mBIT(7)
#define VXGE_HW_RING_RXD_FAST_PATH_ELIGIBLE_GET(ctrl0) vxge_bVALn(ctrl0, 8, 1)
#define VXGE_HW_RING_RXD_L3_CKSUM_CORRECT_GET(ctrl0) vxge_bVALn(ctrl0, 9, 1)
#define VXGE_HW_RING_RXD_L4_CKSUM_CORRECT_GET(ctrl0) vxge_bVALn(ctrl0, 10, 1)
#define VXGE_HW_RING_RXD_T_CODE_GET(ctrl0) vxge_bVALn(ctrl0, 12, 4)
#define VXGE_HW_RING_RXD_T_CODE(val) vxge_vBIT(val, 12, 4)
#define VXGE_HW_RING_RXD_T_CODE_UNUSED VXGE_HW_RING_T_CODE_UNUSED
#define VXGE_HW_RING_RXD_SYN_GET(ctrl0) vxge_bVALn(ctrl0, 16, 1)
#define VXGE_HW_RING_RXD_IS_ICMP_GET(ctrl0) vxge_bVALn(ctrl0, 17, 1)
#define VXGE_HW_RING_RXD_RTH_SPDM_HIT_GET(ctrl0) vxge_bVALn(ctrl0, 18, 1)
#define VXGE_HW_RING_RXD_RTH_IT_HIT_GET(ctrl0) vxge_bVALn(ctrl0, 19, 1)
#define VXGE_HW_RING_RXD_RTH_HASH_TYPE_GET(ctrl0) vxge_bVALn(ctrl0, 20, 4)
#define VXGE_HW_RING_RXD_IS_VLAN_GET(ctrl0) vxge_bVALn(ctrl0, 24, 1)
#define VXGE_HW_RING_RXD_ETHER_ENCAP_GET(ctrl0) vxge_bVALn(ctrl0, 25, 2)
#define VXGE_HW_RING_RXD_FRAME_PROTO_GET(ctrl0) vxge_bVALn(ctrl0, 27, 5)
#define VXGE_HW_RING_RXD_L3_CKSUM_GET(ctrl0) vxge_bVALn(ctrl0, 32, 16)
#define VXGE_HW_RING_RXD_L4_CKSUM_GET(ctrl0) vxge_bVALn(ctrl0, 48, 16)
u64 control_1;
#define VXGE_HW_RING_RXD_1_BUFFER0_SIZE_GET(ctrl1) vxge_bVALn(ctrl1, 2, 14)
#define VXGE_HW_RING_RXD_1_BUFFER0_SIZE(val) vxge_vBIT(val, 2, 14)
#define VXGE_HW_RING_RXD_1_BUFFER0_SIZE_MASK vxge_vBIT(0x3FFF, 2, 14)
#define VXGE_HW_RING_RXD_1_RTH_HASH_VAL_GET(ctrl1) vxge_bVALn(ctrl1, 16, 32)
#define VXGE_HW_RING_RXD_VLAN_TAG_GET(ctrl1) vxge_bVALn(ctrl1, 48, 16)
u64 buffer0_ptr;
};
enum vxge_hw_rth_algoritms {
RTH_ALG_JENKINS = 0,
RTH_ALG_MS_RSS = 1,
RTH_ALG_CRC32C = 2
};
/**
* struct vxge_hw_rth_hash_types - RTH hash types.
* @hash_type_tcpipv4_en: Enables RTH field type HashTypeTcpIPv4
* @hash_type_ipv4_en: Enables RTH field type HashTypeIPv4
* @hash_type_tcpipv6_en: Enables RTH field type HashTypeTcpIPv6
* @hash_type_ipv6_en: Enables RTH field type HashTypeIPv6
* @hash_type_tcpipv6ex_en: Enables RTH field type HashTypeTcpIPv6Ex
* @hash_type_ipv6ex_en: Enables RTH field type HashTypeIPv6Ex
*
* Used to pass RTH hash types to rts_rts_set.
*
* See also: vxge_hw_vpath_rts_rth_set(), vxge_hw_vpath_rts_rth_get().
*/
struct vxge_hw_rth_hash_types {
u8 hash_type_tcpipv4_en:1,
hash_type_ipv4_en:1,
hash_type_tcpipv6_en:1,
hash_type_ipv6_en:1,
hash_type_tcpipv6ex_en:1,
hash_type_ipv6ex_en:1;
};
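/*
 * Sketch (illustrative): selecting IPv4 and TCP/IPv4 hashing only, as the
 * hash-type mask that would be handed to vxge_hw_vpath_rts_rth_set()
 * mentioned above.
 */
static const struct vxge_hw_rth_hash_types example_hash_types = {
	.hash_type_tcpipv4_en = 1,
	.hash_type_ipv4_en    = 1,
	/* all IPv6 hash types left disabled */
};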
void vxge_hw_device_debug_set(
struct __vxge_hw_device *devh,
enum vxge_debug_level level,
u32 mask);
u32
vxge_hw_device_error_level_get(struct __vxge_hw_device *devh);
u32
vxge_hw_device_trace_level_get(struct __vxge_hw_device *devh);
/**
* vxge_hw_ring_rxd_size_get - Get the size of ring descriptor.
* @buf_mode: Buffer mode (1, 3 or 5)
*
 * This function returns the size of the RxD for the given buffer mode
*/
static inline u32 vxge_hw_ring_rxd_size_get(u32 buf_mode)
{
return sizeof(struct vxge_hw_ring_rxd_1);
}
/**
* vxge_hw_ring_rxds_per_block_get - Get the number of rxds per block.
* @buf_mode: Buffer mode (1 buffer mode only)
*
 * This function returns the number of RxDs per RxD block for the given buffer mode
*/
static inline u32 vxge_hw_ring_rxds_per_block_get(u32 buf_mode)
{
return (u32)((VXGE_HW_BLOCK_SIZE-16) /
sizeof(struct vxge_hw_ring_rxd_1));
}
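/*
 * Editorial worked example (block size assumed): if VXGE_HW_BLOCK_SIZE is
 * 4096 and the one-buffer RxD above is 3 * 8 = 24 bytes, the helper yields
 * (4096 - 16) / 24 = 170 descriptors per block; the 16 bytes subtracted
 * are presumed reserved for ring-internal bookkeeping at the end of the
 * block.
 */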
/**
* vxge_hw_ring_rxd_1b_set - Prepare 1-buffer-mode descriptor.
* @rxdh: Descriptor handle.
* @dma_pointer: DMA address of a single receive buffer this descriptor
 * should carry. Note that by the time vxge_hw_ring_rxd_1b_set is called,
 * the receive buffer must already be mapped to the device.
* @size: Size of the receive @dma_pointer buffer.
*
* Prepare 1-buffer-mode Rx descriptor for posting
* (via vxge_hw_ring_rxd_post()).
*
* This inline helper-function does not return any parameters and always
* succeeds.
*
*/
static inline
void vxge_hw_ring_rxd_1b_set(
void *rxdh,
dma_addr_t dma_pointer,
u32 size)
{
struct vxge_hw_ring_rxd_1 *rxdp = (struct vxge_hw_ring_rxd_1 *)rxdh;
rxdp->buffer0_ptr = dma_pointer;
rxdp->control_1 &= ~VXGE_HW_RING_RXD_1_BUFFER0_SIZE_MASK;
rxdp->control_1 |= VXGE_HW_RING_RXD_1_BUFFER0_SIZE(size);
}
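/*
 * Editorial sketch (not from the original sources): a minimal
 * rxd_init-style callback that maps a caller-provided buffer and then
 * prepares the descriptor with vxge_hw_ring_rxd_1b_set().  The private
 * context structure, its fields and VXGE_HW_FAIL as the error status
 * are assumptions made for illustration.
 */
static inline enum vxge_hw_status example_rxd_init(void *rxdh, void *userdata)
{
	struct example_rx_ctx {		/* hypothetical per-ring context */
		struct device *dev;
		void *buf;
		unsigned int buf_len;
		dma_addr_t dma;
	} *ctx = userdata;

	ctx->dma = dma_map_single(ctx->dev, ctx->buf, ctx->buf_len,
				  DMA_FROM_DEVICE);
	if (dma_mapping_error(ctx->dev, ctx->dma))
		return VXGE_HW_FAIL;	/* error status name assumed */

	vxge_hw_ring_rxd_1b_set(rxdh, ctx->dma, ctx->buf_len);
	return VXGE_HW_OK;
}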
/**
* vxge_hw_ring_rxd_1b_get - Get data from the completed 1-buf
* descriptor.
 * @ring_handle: Ring handle.
 * @rxdh: Descriptor handle.
 * @pkt_length: Returned length (in bytes) of the received data in the
 * buffer pointed to by the descriptor.
*
* Retrieve protocol data from the completed 1-buffer-mode Rx descriptor.
 * This inline helper-function uses the completed descriptor to populate the
 * caller's "out" parameters. The function always succeeds.
*
*/
static inline
void vxge_hw_ring_rxd_1b_get(
struct __vxge_hw_ring *ring_handle,
void *rxdh,
u32 *pkt_length)
{
struct vxge_hw_ring_rxd_1 *rxdp = (struct vxge_hw_ring_rxd_1 *)rxdh;
*pkt_length =
(u32)VXGE_HW_RING_RXD_1_BUFFER0_SIZE_GET(rxdp->control_1);
}
/**
* vxge_hw_ring_rxd_1b_info_get - Get extended information associated with
* a completed receive descriptor for 1b mode.
 * @ring_handle: Ring handle.
* @rxdh: Descriptor handle.
* @rxd_info: Descriptor information
*
* Retrieve extended information associated with a completed receive descriptor.
*
*/
static inline
void vxge_hw_ring_rxd_1b_info_get(
struct __vxge_hw_ring *ring_handle,
void *rxdh,
struct vxge_hw_ring_rxd_info *rxd_info)
{
struct vxge_hw_ring_rxd_1 *rxdp = (struct vxge_hw_ring_rxd_1 *)rxdh;
rxd_info->syn_flag =
(u32)VXGE_HW_RING_RXD_SYN_GET(rxdp->control_0);
rxd_info->is_icmp =
(u32)VXGE_HW_RING_RXD_IS_ICMP_GET(rxdp->control_0);
rxd_info->fast_path_eligible =
(u32)VXGE_HW_RING_RXD_FAST_PATH_ELIGIBLE_GET(rxdp->control_0);
rxd_info->l3_cksum_valid =
(u32)VXGE_HW_RING_RXD_L3_CKSUM_CORRECT_GET(rxdp->control_0);
rxd_info->l3_cksum =
(u32)VXGE_HW_RING_RXD_L3_CKSUM_GET(rxdp->control_0);
rxd_info->l4_cksum_valid =
(u32)VXGE_HW_RING_RXD_L4_CKSUM_CORRECT_GET(rxdp->control_0);
rxd_info->l4_cksum =
(u32)VXGE_HW_RING_RXD_L4_CKSUM_GET(rxdp->control_0);
rxd_info->frame =
(u32)VXGE_HW_RING_RXD_ETHER_ENCAP_GET(rxdp->control_0);
rxd_info->proto =
(u32)VXGE_HW_RING_RXD_FRAME_PROTO_GET(rxdp->control_0);
rxd_info->is_vlan =
(u32)VXGE_HW_RING_RXD_IS_VLAN_GET(rxdp->control_0);
rxd_info->vlan =
(u32)VXGE_HW_RING_RXD_VLAN_TAG_GET(rxdp->control_1);
rxd_info->rth_bucket =
(u32)VXGE_HW_RING_RXD_RTH_BUCKET_GET(rxdp->control_0);
rxd_info->rth_it_hit =
(u32)VXGE_HW_RING_RXD_RTH_IT_HIT_GET(rxdp->control_0);
rxd_info->rth_spdm_hit =
(u32)VXGE_HW_RING_RXD_RTH_SPDM_HIT_GET(rxdp->control_0);
rxd_info->rth_hash_type =
(u32)VXGE_HW_RING_RXD_RTH_HASH_TYPE_GET(rxdp->control_0);
rxd_info->rth_value =
(u32)VXGE_HW_RING_RXD_1_RTH_HASH_VAL_GET(rxdp->control_1);
}
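/*
 * Editorial sketch (not part of the original sources): typical use of
 * the two helpers above when handling a completed one-buffer RxD.  The
 * skb handling is elided; only the checksum decision is shown.
 */
static inline void example_rxd_complete(struct __vxge_hw_ring *ringh, void *rxdh)
{
	struct vxge_hw_ring_rxd_info info;
	u32 pkt_length;

	vxge_hw_ring_rxd_1b_get(ringh, rxdh, &pkt_length);
	vxge_hw_ring_rxd_1b_info_get(ringh, rxdh, &info);

	/* Trust the hardware checksum only when both layers verified it. */
	if (info.l3_cksum_valid && info.l4_cksum_valid)
		; /* e.g. set skb->ip_summed = CHECKSUM_UNNECESSARY */
}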
/**
* vxge_hw_ring_rxd_private_get - Get driver private per-descriptor data
 * of a 1b-mode or 3b-mode ring.
* @rxdh: Descriptor handle.
*
* Returns: private driver info associated with the descriptor.
 * The driver requests per-descriptor space via struct vxge_hw_ring_attr.
*
*/
static inline void *vxge_hw_ring_rxd_private_get(void *rxdh)
{
struct vxge_hw_ring_rxd_1 *rxdp = (struct vxge_hw_ring_rxd_1 *)rxdh;
return (void *)(size_t)rxdp->host_control;
}
/**
* vxge_hw_fifo_txdl_cksum_set_bits - Offload checksum.
* @txdlh: Descriptor handle.
* @cksum_bits: Specifies which checksums are to be offloaded: IPv4,
* and/or TCP and/or UDP.
*
* Ask Titan to calculate IPv4 & transport checksums for _this_ transmit
* descriptor.
* This API is part of the preparation of the transmit descriptor for posting
* (via vxge_hw_fifo_txdl_post()). The related "preparation" APIs include
* vxge_hw_fifo_txdl_mss_set(), vxge_hw_fifo_txdl_buffer_set_aligned(),
* and vxge_hw_fifo_txdl_buffer_set().
* All these APIs fill in the fields of the fifo descriptor,
* in accordance with the Titan specification.
*
*/
static inline void vxge_hw_fifo_txdl_cksum_set_bits(void *txdlh, u64 cksum_bits)
{
struct vxge_hw_fifo_txd *txdp = (struct vxge_hw_fifo_txd *)txdlh;
txdp->control_1 |= cksum_bits;
}
/**
* vxge_hw_fifo_txdl_mss_set - Set MSS.
* @txdlh: Descriptor handle.
* @mss: MSS size for _this_ TCP connection. Passed by TCP stack down to the
* driver, which in turn inserts the MSS into the @txdlh.
*
* This API is part of the preparation of the transmit descriptor for posting
* (via vxge_hw_fifo_txdl_post()). The related "preparation" APIs include
* vxge_hw_fifo_txdl_buffer_set(), vxge_hw_fifo_txdl_buffer_set_aligned(),
* and vxge_hw_fifo_txdl_cksum_set_bits().
* All these APIs fill in the fields of the fifo descriptor,
* in accordance with the Titan specification.
*
*/
static inline void vxge_hw_fifo_txdl_mss_set(void *txdlh, int mss)
{
struct vxge_hw_fifo_txd *txdp = (struct vxge_hw_fifo_txd *)txdlh;
txdp->control_0 |= VXGE_HW_FIFO_TXD_LSO_EN;
txdp->control_0 |= VXGE_HW_FIFO_TXD_LSO_MSS(mss);
}
/**
* vxge_hw_fifo_txdl_vlan_set - Set VLAN tag.
* @txdlh: Descriptor handle.
* @vlan_tag: 16bit VLAN tag.
*
* Insert VLAN tag into specified transmit descriptor.
* The actual insertion of the tag into outgoing frame is done by the hardware.
*/
static inline void vxge_hw_fifo_txdl_vlan_set(void *txdlh, u16 vlan_tag)
{
struct vxge_hw_fifo_txd *txdp = (struct vxge_hw_fifo_txd *)txdlh;
txdp->control_1 |= VXGE_HW_FIFO_TXD_VLAN_ENABLE;
txdp->control_1 |= VXGE_HW_FIFO_TXD_VLAN_TAG(vlan_tag);
}
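/*
 * Editorial sketch (not from the original driver): the typical order of
 * the "preparation" helpers above for a TSO frame carrying a VLAN tag.
 * The checksum-enable bit names used here are assumptions for
 * illustration; the real definitions live elsewhere in the driver
 * headers.
 */
static inline void example_txdl_prepare(void *txdlh, int mss, u16 vlan_tag)
{
	/* Bit names below are assumed for illustration only. */
	vxge_hw_fifo_txdl_cksum_set_bits(txdlh,
			VXGE_HW_FIFO_TXD_TX_CKO_IPV4_EN |
			VXGE_HW_FIFO_TXD_TX_CKO_TCP_EN);
	vxge_hw_fifo_txdl_mss_set(txdlh, mss);
	vxge_hw_fifo_txdl_vlan_set(txdlh, vlan_tag);
	/* ...followed by buffer_set() and txdl_post(), see the docs above. */
}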
/**
* vxge_hw_fifo_txdl_private_get - Retrieve per-descriptor private data.
* @txdlh: Descriptor handle.
*
* Retrieve per-descriptor private data.
* Note that driver requests per-descriptor space via
* struct vxge_hw_fifo_attr passed to
* vxge_hw_vpath_open().
*
* Returns: private driver data associated with the descriptor.
*/
static inline void *vxge_hw_fifo_txdl_private_get(void *txdlh)
{
struct vxge_hw_fifo_txd *txdp = (struct vxge_hw_fifo_txd *)txdlh;
return (void *)(size_t)txdp->host_control;
}
/**
* struct vxge_hw_ring_attr - Ring open "template".
* @callback: Ring completion callback. HW invokes the callback when there
* are new completions on that ring. In many implementations
* the @callback executes in the hw interrupt context.
* @rxd_init: Ring's descriptor-initialize callback.
* See vxge_hw_ring_rxd_init_f{}.
* If not NULL, HW invokes the callback when opening
* the ring.
* @rxd_term: Ring's descriptor-terminate callback. If not NULL,
* HW invokes the callback when closing the corresponding ring.
* See also vxge_hw_ring_rxd_term_f{}.
* @userdata: User-defined "context" of _that_ ring. Passed back to the
* user as one of the @callback, @rxd_init, and @rxd_term arguments.
* @per_rxd_space: If specified (i.e., greater than zero): extra space
 * reserved by HW for each receive descriptor.
 * Can be used to store, and retrieve on completion,
 * information specific to the driver.
*
* Ring open "template". User fills the structure with ring
* attributes and passes it to vxge_hw_vpath_open().
*/
struct vxge_hw_ring_attr {
enum vxge_hw_status (*callback)(
struct __vxge_hw_ring *ringh,
void *rxdh,
u8 t_code,
void *userdata);
enum vxge_hw_status (*rxd_init)(
void *rxdh,
void *userdata);
void (*rxd_term)(
void *rxdh,
enum vxge_hw_rxd_state state,
void *userdata);
void *userdata;
u32 per_rxd_space;
};
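/*
 * Editorial sketch (not from the original sources): a minimal ring
 * completion callback together with a matching attribute block.  The
 * VXGE_HW_RING_T_CODE_OK value and the drop-on-error policy are
 * assumptions made for illustration.
 */
static inline enum vxge_hw_status
example_ring_callback(struct __vxge_hw_ring *ringh, void *rxdh, u8 t_code,
		      void *userdata)
{
	if (t_code != VXGE_HW_RING_T_CODE_OK)	/* t_code name assumed */
		return VXGE_HW_OK;		/* drop errored frame */

	/* hand the buffer to the stack, then re-initialize the RxD */
	return VXGE_HW_OK;
}

static const struct vxge_hw_ring_attr example_ring_attr = {
	.callback	= example_ring_callback,
	.rxd_init	= NULL,			/* optional */
	.rxd_term	= NULL,			/* optional */
	.userdata	= NULL,
	.per_rxd_space	= sizeof(void *),	/* e.g. room for an skb pointer */
};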
/**
* function vxge_hw_fifo_callback_f - FIFO callback.
 * @vpath_handle: Virtual path whose fifo contains one or more completed
 * descriptors.
 * @txdlh: First completed descriptor.
 * @txdl_priv: Pointer to the per-TxDL space allocated.
* @t_code: Transfer code, as per Titan User Guide.
* Returned by HW.
* @host_control: Opaque 64bit data stored by driver inside the Titan
* descriptor prior to posting the latter on the fifo
* via vxge_hw_fifo_txdl_post(). The @host_control is returned
* as is to the driver with each completed descriptor.
* @userdata: Opaque per-fifo data specified at fifo open
* time, via vxge_hw_vpath_open().
*
* Fifo completion callback (type declaration). A single per-fifo
* callback is specified at fifo open time, via
* vxge_hw_vpath_open(). Typically gets called as part of the processing
* of the Interrupt Service Routine.
*
* Fifo callback gets called by HW if, and only if, there is at least
 * one new completion on a given fifo. Upon processing the first @txdlh the
 * driver is expected to continue consuming completions using:
* - vxge_hw_fifo_txdl_next_completed()
*
* Note that failure to process new completions in a timely fashion
* leads to VXGE_HW_INF_OUT_OF_DESCRIPTORS condition.
*
* Non-zero @t_code means failure to process transmit descriptor.
*
* In the "transmit" case the failure could happen, for instance, when the
* link is down, in which case Titan completes the descriptor because it
* is not able to send the data out.
*
* For details please refer to Titan User Guide.
*
* See also: vxge_hw_fifo_txdl_next_completed(), vxge_hw_fifo_txdl_term_f{}.
*/
/**
* function vxge_hw_fifo_txdl_term_f - Terminate descriptor callback.
* @txdlh: First completed descriptor.
 * @txdl_priv: Pointer to the per-TxDL space allocated.
* @state: One of the enum vxge_hw_txdl_state{} enumerated states.
* @userdata: Per-fifo user data (a.k.a. context) specified at
* fifo open time, via vxge_hw_vpath_open().
*
 * Terminate descriptor callback. Unless NULL is specified in the
 * struct vxge_hw_fifo_attr{} structure passed to vxge_hw_vpath_open(),
 * HW invokes the callback as part of closing the fifo, prior to
 * de-allocating the fifo and associated data structures
 * (including descriptors).
 * The driver should use the callback to (for instance) unmap
 * and free DMA data buffers associated with the posted (state =
 * VXGE_HW_TXDL_STATE_POSTED) descriptors,
 * as well as perform any other relevant cleanup.
*
* See also: struct vxge_hw_fifo_attr{}
*/
/**
* struct vxge_hw_fifo_attr - Fifo open "template".
* @callback: Fifo completion callback. HW invokes the callback when there
* are new completions on that fifo. In many implementations
* the @callback executes in the hw interrupt context.
* @txdl_term: Fifo's descriptor-terminate callback. If not NULL,
* HW invokes the callback when closing the corresponding fifo.
* See also vxge_hw_fifo_txdl_term_f{}.
* @userdata: User-defined "context" of _that_ fifo. Passed back to the
* user as one of the @callback, and @txdl_term arguments.
* @per_txdl_space: If specified (i.e., greater than zero): extra space
 * reserved by HW for each transmit descriptor. Can be used to
* store, and retrieve on completion, information specific
* to the driver.
*
* Fifo open "template". User fills the structure with fifo
* attributes and passes it to vxge_hw_vpath_open().
*/
struct vxge_hw_fifo_attr {
enum vxge_hw_status (*callback)(
struct __vxge_hw_fifo *fifo_handle,
void *txdlh,
enum vxge_hw_fifo_tcode t_code,
void *userdata,
struct sk_buff ***skb_ptr,
int nr_skb, int *more);
void (*txdl_term)(
void *txdlh,
enum vxge_hw_txdl_state state,
void *userdata);
void *userdata;
u32 per_txdl_space;
};
/**
* struct vxge_hw_vpath_attr - Attributes of virtual path
* @vp_id: Identifier of Virtual Path
* @ring_attr: Attributes of ring for non-offload receive
* @fifo_attr: Attributes of fifo for non-offload transmit
*
* Attributes of virtual path. This structure is passed as parameter
* to the vxge_hw_vpath_open() routine to set the attributes of ring and fifo.
*/
struct vxge_hw_vpath_attr {
u32 vp_id;
struct vxge_hw_ring_attr ring_attr;
struct vxge_hw_fifo_attr fifo_attr;
};
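/*
 * Editorial sketch (not from the original sources): opening a single
 * virtual path with the attribute blocks described above.  Callback and
 * userdata wiring is elided; devh is assumed to have come from
 * vxge_hw_device_initialize(), declared further down in this header.
 */
static inline enum vxge_hw_status
example_vpath_open(struct __vxge_hw_device *devh, u32 vp_id,
		   struct __vxge_hw_vpath_handle **vph)
{
	struct vxge_hw_vpath_attr attr = { .vp_id = vp_id };

	attr.ring_attr.per_rxd_space = sizeof(void *);
	attr.fifo_attr.per_txdl_space = sizeof(void *);
	/* attr.ring_attr.callback and attr.fifo_attr.callback go here */

	return vxge_hw_vpath_open(devh, &attr, vph);
}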
enum vxge_hw_status vxge_hw_device_hw_info_get(
void __iomem *bar0,
struct vxge_hw_device_hw_info *hw_info);
enum vxge_hw_status vxge_hw_device_config_default_get(
struct vxge_hw_device_config *device_config);
/**
* vxge_hw_device_link_state_get - Get link state.
* @devh: HW device handle.
*
* Get link state.
* Returns: link state.
*/
static inline
enum vxge_hw_device_link_state vxge_hw_device_link_state_get(
struct __vxge_hw_device *devh)
{
return devh->link_state;
}
void vxge_hw_device_terminate(struct __vxge_hw_device *devh);
const u8 *
vxge_hw_device_serial_number_get(struct __vxge_hw_device *devh);
u16 vxge_hw_device_link_width_get(struct __vxge_hw_device *devh);
const u8 *
vxge_hw_device_product_name_get(struct __vxge_hw_device *devh);
enum vxge_hw_status vxge_hw_device_initialize(
struct __vxge_hw_device **devh,
struct vxge_hw_device_attr *attr,
struct vxge_hw_device_config *device_config);
enum vxge_hw_status vxge_hw_device_getpause_data(
struct __vxge_hw_device *devh,
u32 port,
u32 *tx,
u32 *rx);
enum vxge_hw_status vxge_hw_device_setpause_data(
struct __vxge_hw_device *devh,
u32 port,
u32 tx,
u32 rx);
static inline void *vxge_os_dma_malloc(struct pci_dev *pdev,
unsigned long size,
struct pci_dev **p_dmah,
struct pci_dev **p_dma_acch)
{
void *vaddr;
unsigned long misaligned = 0;
int realloc_flag = 0;
*p_dma_acch = *p_dmah = NULL;
realloc:
vaddr = kmalloc(size, GFP_KERNEL | GFP_DMA);
if (vaddr == NULL)
return vaddr;
misaligned = (unsigned long)VXGE_ALIGN((unsigned long)vaddr,
VXGE_CACHE_LINE_SIZE);
if (realloc_flag)
goto out;
if (misaligned) {
/* misaligned, free current one and try allocating
* size + VXGE_CACHE_LINE_SIZE memory
*/
kfree(vaddr);
size += VXGE_CACHE_LINE_SIZE;
realloc_flag = 1;
goto realloc;
}
out:
*(unsigned long *)p_dma_acch = misaligned;
vaddr = (void *)((u8 *)vaddr + misaligned);
return vaddr;
}
static inline void vxge_os_dma_free(struct pci_dev *pdev, const void *vaddr,
struct pci_dev **p_dma_acch)
{
unsigned long misaligned = *(unsigned long *)p_dma_acch;
u8 *tmp = (u8 *)vaddr;
tmp -= misaligned;
kfree((void *)tmp);
}
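/*
 * Editorial sketch (not from the original sources): the two helpers
 * above are used as a pair.  The misalignment stored through
 * p_dma_acch by the allocator is exactly what vxge_os_dma_free()
 * subtracts to recover the original kmalloc() pointer.
 */
static inline void example_dma_alloc_free(struct pci_dev *pdev)
{
	struct pci_dev *dmah, *dma_acch;
	void *blk;

	blk = vxge_os_dma_malloc(pdev, 4096, &dmah, &dma_acch);
	if (blk)
		vxge_os_dma_free(pdev, blk, &dma_acch);
}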
/*
 * __vxge_hw_mempool_item_priv - returns a pointer to the per-item private space
*/
static inline void*
__vxge_hw_mempool_item_priv(
struct vxge_hw_mempool *mempool,
u32 memblock_idx,
void *item,
u32 *memblock_item_idx)
{
ptrdiff_t offset;
void *memblock = mempool->memblocks_arr[memblock_idx];
offset = (u32)((u8 *)item - (u8 *)memblock);
vxge_assert(offset >= 0 && (u32)offset < mempool->memblock_size);
(*memblock_item_idx) = (u32) offset / mempool->item_size;
vxge_assert((*memblock_item_idx) < mempool->items_per_memblock);
return (u8 *)mempool->memblocks_priv_arr[memblock_idx] +
(*memblock_item_idx) * mempool->items_priv_size;
}
/*
 * __vxge_hw_fifo_txdl_priv - Return the per-TxDL private structure
 * associated with a TxD of the fifo.
 * @fifo: Fifo
 * @txdp: Pointer to a TxD
*/
static inline struct __vxge_hw_fifo_txdl_priv *
__vxge_hw_fifo_txdl_priv(
struct __vxge_hw_fifo *fifo,
struct vxge_hw_fifo_txd *txdp)
{
return (struct __vxge_hw_fifo_txdl_priv *)
(((char *)((ulong)txdp->host_control)) +
fifo->per_txdl_space);
}
enum vxge_hw_status vxge_hw_vpath_open(
struct __vxge_hw_device *devh,
struct vxge_hw_vpath_attr *attr,
struct __vxge_hw_vpath_handle **vpath_handle);
enum vxge_hw_status vxge_hw_vpath_close(
struct __vxge_hw_vpath_handle *vpath_handle);
enum vxge_hw_status
vxge_hw_vpath_reset(
struct __vxge_hw_vpath_handle *vpath_handle);
enum vxge_hw_status
vxge_hw_vpath_recover_from_reset(
struct __vxge_hw_vpath_handle *vpath_handle);
void
vxge_hw_vpath_enable(struct __vxge_hw_vpath_handle *vp);
enum vxge_hw_status
vxge_hw_vpath_check_leak(struct __vxge_hw_ring *ringh);
enum vxge_hw_status vxge_hw_vpath_mtu_set(
struct __vxge_hw_vpath_handle *vpath_handle,
u32 new_mtu);
void
vxge_hw_vpath_rx_doorbell_init(struct __vxge_hw_vpath_handle *vp);
static inline void __vxge_hw_pio_mem_write32_upper(u32 val, void __iomem *addr)
{
writel(val, addr + 4);
}
static inline void __vxge_hw_pio_mem_write32_lower(u32 val, void __iomem *addr)
{
writel(val, addr);
}
enum vxge_hw_status
vxge_hw_device_flick_link_led(struct __vxge_hw_device *devh, u64 on_off);
enum vxge_hw_status
vxge_hw_vpath_strip_fcs_check(struct __vxge_hw_device *hldev, u64 vpath_mask);
/**
* vxge_debug_ll
* @level: level of debug verbosity.
* @mask: mask for the debug
* @buf: Circular buffer for tracing
* @fmt: printf like format string
*
 * Provides logging facilities. Can be customized on a per-module
 * basis and/or with debug levels. Input parameters, except level and
 * mask, are the same as posix printf. This function
 * may be compiled out if the DEBUG macro was never defined.
* See also: enum vxge_debug_level{}.
*/
#if (VXGE_COMPONENT_LL & VXGE_DEBUG_MODULE_MASK)
#define vxge_debug_ll(level, mask, fmt, ...) do { \
if ((level >= VXGE_ERR && VXGE_COMPONENT_LL & VXGE_DEBUG_ERR_MASK) || \
(level >= VXGE_TRACE && VXGE_COMPONENT_LL & VXGE_DEBUG_TRACE_MASK))\
if ((mask & VXGE_DEBUG_MASK) == mask) \
printk(fmt "\n", ##__VA_ARGS__); \
} while (0)
#else
#define vxge_debug_ll(level, mask, fmt, ...)
#endif
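/*
 * Editorial example (not from the original sources): a typical call
 * site.  The level selects error vs. trace severity and the mask is
 * matched against VXGE_DEBUG_MASK before anything is printed, e.g.:
 *
 *	vxge_debug_ll(VXGE_TRACE, VXGE_DEBUG_TRACE_MASK,
 *		      "%s: vpath %d opened", __func__, vp_id);
 */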
enum vxge_hw_status vxge_hw_vpath_rts_rth_itable_set(
struct __vxge_hw_vpath_handle **vpath_handles,
u32 vpath_count,
u8 *mtable,
u8 *itable,
u32 itable_size);
enum vxge_hw_status vxge_hw_vpath_rts_rth_set(
struct __vxge_hw_vpath_handle *vpath_handle,
enum vxge_hw_rth_algoritms algorithm,
struct vxge_hw_rth_hash_types *hash_type,
u16 bucket_size);
enum vxge_hw_status
__vxge_hw_device_is_privilaged(u32 host_type, u32 func_id);
#define VXGE_HW_MIN_SUCCESSIVE_IDLE_COUNT 5
#define VXGE_HW_MAX_POLLING_COUNT 100
void
vxge_hw_device_wait_receive_idle(struct __vxge_hw_device *hldev);
enum vxge_hw_status
vxge_hw_upgrade_read_version(struct __vxge_hw_device *hldev, u32 *major,
u32 *minor, u32 *build);
enum vxge_hw_status vxge_hw_flash_fw(struct __vxge_hw_device *hldev);
enum vxge_hw_status
vxge_update_fw_image(struct __vxge_hw_device *hldev, const u8 *filebuf,
int size);
enum vxge_hw_status
vxge_hw_vpath_eprom_img_ver_get(struct __vxge_hw_device *hldev,
struct eprom_image *eprom_image_data);
int vxge_hw_vpath_wait_receive_idle(struct __vxge_hw_device *hldev, u32 vp_id);
#endif
/******************************************************************************
* This software may be used and distributed according to the terms of
* the GNU General Public License (GPL), incorporated herein by reference.
* Drivers based on or derived from this code fall under the GPL and must
* retain the authorship, copyright and license notice. This file is not
* a complete program and may only be used when the entire operating
* system is licensed under the GPL.
* See the file COPYING in this distribution for more information.
*
* vxge-ethtool.c: Driver for Exar Corp's X3100 Series 10GbE PCIe I/O
* Virtualized Server Adapter.
* Copyright(c) 2002-2010 Exar Corp.
******************************************************************************/
#include <linux/ethtool.h>
#include <linux/slab.h>
#include <linux/pci.h>
#include <linux/etherdevice.h>
#include "vxge-ethtool.h"
static const char ethtool_driver_stats_keys[][ETH_GSTRING_LEN] = {
{"\n DRIVER STATISTICS"},
{"vpaths_opened"},
{"vpath_open_fail_cnt"},
{"link_up_cnt"},
{"link_down_cnt"},
{"tx_frms"},
{"tx_errors"},
{"tx_bytes"},
{"txd_not_free"},
{"txd_out_of_desc"},
{"rx_frms"},
{"rx_errors"},
{"rx_bytes"},
{"rx_mcast"},
{"pci_map_fail_cnt"},
{"skb_alloc_fail_cnt"}
};
/**
* vxge_ethtool_set_link_ksettings - Sets different link parameters.
* @dev: device pointer.
* @cmd: pointer to the structure with parameters given by ethtool to set
* link information.
*
* The function sets different link parameters provided by the user onto
* the NIC.
* Return value:
* 0 on success.
*/
static int
vxge_ethtool_set_link_ksettings(struct net_device *dev,
const struct ethtool_link_ksettings *cmd)
{
/* We currently only support 10Gb/FULL */
if ((cmd->base.autoneg == AUTONEG_ENABLE) ||
(cmd->base.speed != SPEED_10000) ||
(cmd->base.duplex != DUPLEX_FULL))
return -EINVAL;
return 0;
}
/**
* vxge_ethtool_get_link_ksettings - Return link specific information.
* @dev: device pointer.
* @cmd: pointer to the structure with parameters given by ethtool
* to return link information.
*
 * Returns link-specific information like speed, duplex, etc. to ethtool.
 * Return value:
 * 0 on success.
*/
static int vxge_ethtool_get_link_ksettings(struct net_device *dev,
struct ethtool_link_ksettings *cmd)
{
ethtool_link_ksettings_zero_link_mode(cmd, supported);
ethtool_link_ksettings_add_link_mode(cmd, supported, 10000baseT_Full);
ethtool_link_ksettings_add_link_mode(cmd, supported, FIBRE);
ethtool_link_ksettings_zero_link_mode(cmd, advertising);
ethtool_link_ksettings_add_link_mode(cmd, advertising, 10000baseT_Full);
ethtool_link_ksettings_add_link_mode(cmd, advertising, FIBRE);
cmd->base.port = PORT_FIBRE;
if (netif_carrier_ok(dev)) {
cmd->base.speed = SPEED_10000;
cmd->base.duplex = DUPLEX_FULL;
} else {
cmd->base.speed = SPEED_UNKNOWN;
cmd->base.duplex = DUPLEX_UNKNOWN;
}
cmd->base.autoneg = AUTONEG_DISABLE;
return 0;
}
/**
* vxge_ethtool_gdrvinfo - Returns driver specific information.
* @dev: device pointer.
* @info: pointer to the structure with parameters given by ethtool to
* return driver information.
*
 * Returns driver-specific information like name, version, etc. to ethtool.
*/
static void vxge_ethtool_gdrvinfo(struct net_device *dev,
struct ethtool_drvinfo *info)
{
struct vxgedev *vdev = netdev_priv(dev);
strlcpy(info->driver, VXGE_DRIVER_NAME, sizeof(info->driver));
strlcpy(info->version, DRV_VERSION, sizeof(info->version));
strlcpy(info->fw_version, vdev->fw_version, sizeof(info->fw_version));
strlcpy(info->bus_info, pci_name(vdev->pdev), sizeof(info->bus_info));
}
/**
 * vxge_ethtool_gregs - Dump the vpath register space of the Titan into the buffer.
* @dev: device pointer.
* @regs: pointer to the structure with parameters given by ethtool for
* dumping the registers.
 * @space: The buffer into which all the registers are dumped.
*
 * Dumps the vpath register space of the Titan NIC into the user-supplied
 * buffer area.
*/
static void vxge_ethtool_gregs(struct net_device *dev,
struct ethtool_regs *regs, void *space)
{
int index, offset;
enum vxge_hw_status status;
u64 reg;
u64 *reg_space = (u64 *)space;
struct vxgedev *vdev = netdev_priv(dev);
struct __vxge_hw_device *hldev = vdev->devh;
regs->len = sizeof(struct vxge_hw_vpath_reg) * vdev->no_of_vpath;
regs->version = vdev->pdev->subsystem_device;
for (index = 0; index < vdev->no_of_vpath; index++) {
for (offset = 0; offset < sizeof(struct vxge_hw_vpath_reg);
offset += 8) {
status = vxge_hw_mgmt_reg_read(hldev,
vxge_hw_mgmt_reg_type_vpath,
vdev->vpaths[index].device_id,
offset, &reg);
if (status != VXGE_HW_OK) {
vxge_debug_init(VXGE_ERR,
"%s:%d Getting reg dump Failed",
__func__, __LINE__);
return;
}
*reg_space++ = reg;
}
}
}
/**
 * vxge_ethtool_idnic - Physically identify the NIC on the system.
* @dev : device pointer.
* @state : requested LED state
*
 * Used to physically identify the NIC on the system by blinking the link LED.
 * Returns 0 on success.
*/
static int vxge_ethtool_idnic(struct net_device *dev,
enum ethtool_phys_id_state state)
{
struct vxgedev *vdev = netdev_priv(dev);
struct __vxge_hw_device *hldev = vdev->devh;
switch (state) {
case ETHTOOL_ID_ACTIVE:
vxge_hw_device_flick_link_led(hldev, VXGE_FLICKER_ON);
break;
case ETHTOOL_ID_INACTIVE:
vxge_hw_device_flick_link_led(hldev, VXGE_FLICKER_OFF);
break;
default:
return -EINVAL;
}
return 0;
}
/**
* vxge_ethtool_getpause_data - Pause frame generation and reception.
* @dev : device pointer.
* @ep : pointer to the structure with pause parameters given by ethtool.
* Description:
* Returns the Pause frame generation and reception capability of the NIC.
* Return value:
* void
*/
static void vxge_ethtool_getpause_data(struct net_device *dev,
struct ethtool_pauseparam *ep)
{
struct vxgedev *vdev = netdev_priv(dev);
struct __vxge_hw_device *hldev = vdev->devh;
vxge_hw_device_getpause_data(hldev, 0, &ep->tx_pause, &ep->rx_pause);
}
/**
* vxge_ethtool_setpause_data - set/reset pause frame generation.
* @dev : device pointer.
* @ep : pointer to the structure with pause parameters given by ethtool.
* Description:
* It can be used to set or reset Pause frame generation or reception
* support of the NIC.
* Return value:
* int, returns 0 on Success
*/
static int vxge_ethtool_setpause_data(struct net_device *dev,
struct ethtool_pauseparam *ep)
{
struct vxgedev *vdev = netdev_priv(dev);
struct __vxge_hw_device *hldev = vdev->devh;
vxge_hw_device_setpause_data(hldev, 0, ep->tx_pause, ep->rx_pause);
vdev->config.tx_pause_enable = ep->tx_pause;
vdev->config.rx_pause_enable = ep->rx_pause;
return 0;
}
static void vxge_get_ethtool_stats(struct net_device *dev,
struct ethtool_stats *estats, u64 *tmp_stats)
{
int j, k;
enum vxge_hw_status status;
enum vxge_hw_status swstatus;
struct vxge_vpath *vpath = NULL;
struct vxgedev *vdev = netdev_priv(dev);
struct __vxge_hw_device *hldev = vdev->devh;
struct vxge_hw_xmac_stats *xmac_stats;
struct vxge_hw_device_stats_sw_info *sw_stats;
struct vxge_hw_device_stats_hw_info *hw_stats;
u64 *ptr = tmp_stats;
memset(tmp_stats, 0,
vxge_ethtool_get_sset_count(dev, ETH_SS_STATS) * sizeof(u64));
xmac_stats = kzalloc(sizeof(struct vxge_hw_xmac_stats), GFP_KERNEL);
if (xmac_stats == NULL) {
vxge_debug_init(VXGE_ERR,
"%s : %d Memory Allocation failed for xmac_stats",
__func__, __LINE__);
return;
}
sw_stats = kzalloc(sizeof(struct vxge_hw_device_stats_sw_info),
GFP_KERNEL);
if (sw_stats == NULL) {
kfree(xmac_stats);
vxge_debug_init(VXGE_ERR,
"%s : %d Memory Allocation failed for sw_stats",
__func__, __LINE__);
return;
}
hw_stats = kzalloc(sizeof(struct vxge_hw_device_stats_hw_info),
GFP_KERNEL);
if (hw_stats == NULL) {
kfree(xmac_stats);
kfree(sw_stats);
vxge_debug_init(VXGE_ERR,
"%s : %d Memory Allocation failed for hw_stats",
__func__, __LINE__);
return;
}
*ptr++ = 0;
status = vxge_hw_device_xmac_stats_get(hldev, xmac_stats);
if (status != VXGE_HW_OK) {
if (status != VXGE_HW_ERR_PRIVILEGED_OPERATION) {
vxge_debug_init(VXGE_ERR,
"%s : %d Failure in getting xmac stats",
__func__, __LINE__);
}
}
swstatus = vxge_hw_driver_stats_get(hldev, sw_stats);
if (swstatus != VXGE_HW_OK) {
vxge_debug_init(VXGE_ERR,
"%s : %d Failure in getting sw stats",
__func__, __LINE__);
}
status = vxge_hw_device_stats_get(hldev, hw_stats);
if (status != VXGE_HW_OK) {
vxge_debug_init(VXGE_ERR,
"%s : %d hw_stats_get error", __func__, __LINE__);
}
for (k = 0; k < vdev->no_of_vpath; k++) {
struct vxge_hw_vpath_stats_hw_info *vpath_info;
vpath = &vdev->vpaths[k];
j = vpath->device_id;
vpath_info = hw_stats->vpath_info[j];
if (!vpath_info) {
memset(ptr, 0, (VXGE_HW_VPATH_TX_STATS_LEN +
VXGE_HW_VPATH_RX_STATS_LEN) * sizeof(u64));
ptr += (VXGE_HW_VPATH_TX_STATS_LEN +
VXGE_HW_VPATH_RX_STATS_LEN);
continue;
}
*ptr++ = vpath_info->tx_stats.tx_ttl_eth_frms;
*ptr++ = vpath_info->tx_stats.tx_ttl_eth_octets;
*ptr++ = vpath_info->tx_stats.tx_data_octets;
*ptr++ = vpath_info->tx_stats.tx_mcast_frms;
*ptr++ = vpath_info->tx_stats.tx_bcast_frms;
*ptr++ = vpath_info->tx_stats.tx_ucast_frms;
*ptr++ = vpath_info->tx_stats.tx_tagged_frms;
*ptr++ = vpath_info->tx_stats.tx_vld_ip;
*ptr++ = vpath_info->tx_stats.tx_vld_ip_octets;
*ptr++ = vpath_info->tx_stats.tx_icmp;
*ptr++ = vpath_info->tx_stats.tx_tcp;
*ptr++ = vpath_info->tx_stats.tx_rst_tcp;
*ptr++ = vpath_info->tx_stats.tx_udp;
*ptr++ = vpath_info->tx_stats.tx_unknown_protocol;
*ptr++ = vpath_info->tx_stats.tx_lost_ip;
*ptr++ = vpath_info->tx_stats.tx_parse_error;
*ptr++ = vpath_info->tx_stats.tx_tcp_offload;
*ptr++ = vpath_info->tx_stats.tx_retx_tcp_offload;
*ptr++ = vpath_info->tx_stats.tx_lost_ip_offload;
*ptr++ = vpath_info->rx_stats.rx_ttl_eth_frms;
*ptr++ = vpath_info->rx_stats.rx_vld_frms;
*ptr++ = vpath_info->rx_stats.rx_offload_frms;
*ptr++ = vpath_info->rx_stats.rx_ttl_eth_octets;
*ptr++ = vpath_info->rx_stats.rx_data_octets;
*ptr++ = vpath_info->rx_stats.rx_offload_octets;
*ptr++ = vpath_info->rx_stats.rx_vld_mcast_frms;
*ptr++ = vpath_info->rx_stats.rx_vld_bcast_frms;
*ptr++ = vpath_info->rx_stats.rx_accepted_ucast_frms;
*ptr++ = vpath_info->rx_stats.rx_accepted_nucast_frms;
*ptr++ = vpath_info->rx_stats.rx_tagged_frms;
*ptr++ = vpath_info->rx_stats.rx_long_frms;
*ptr++ = vpath_info->rx_stats.rx_usized_frms;
*ptr++ = vpath_info->rx_stats.rx_osized_frms;
*ptr++ = vpath_info->rx_stats.rx_frag_frms;
*ptr++ = vpath_info->rx_stats.rx_jabber_frms;
*ptr++ = vpath_info->rx_stats.rx_ttl_64_frms;
*ptr++ = vpath_info->rx_stats.rx_ttl_65_127_frms;
*ptr++ = vpath_info->rx_stats.rx_ttl_128_255_frms;
*ptr++ = vpath_info->rx_stats.rx_ttl_256_511_frms;
*ptr++ = vpath_info->rx_stats.rx_ttl_512_1023_frms;
*ptr++ = vpath_info->rx_stats.rx_ttl_1024_1518_frms;
*ptr++ = vpath_info->rx_stats.rx_ttl_1519_4095_frms;
*ptr++ = vpath_info->rx_stats.rx_ttl_4096_8191_frms;
*ptr++ = vpath_info->rx_stats.rx_ttl_8192_max_frms;
*ptr++ = vpath_info->rx_stats.rx_ttl_gt_max_frms;
*ptr++ = vpath_info->rx_stats.rx_ip;
*ptr++ = vpath_info->rx_stats.rx_accepted_ip;
*ptr++ = vpath_info->rx_stats.rx_ip_octets;
*ptr++ = vpath_info->rx_stats.rx_err_ip;
*ptr++ = vpath_info->rx_stats.rx_icmp;
*ptr++ = vpath_info->rx_stats.rx_tcp;
*ptr++ = vpath_info->rx_stats.rx_udp;
*ptr++ = vpath_info->rx_stats.rx_err_tcp;
*ptr++ = vpath_info->rx_stats.rx_lost_frms;
*ptr++ = vpath_info->rx_stats.rx_lost_ip;
*ptr++ = vpath_info->rx_stats.rx_lost_ip_offload;
*ptr++ = vpath_info->rx_stats.rx_various_discard;
*ptr++ = vpath_info->rx_stats.rx_sleep_discard;
*ptr++ = vpath_info->rx_stats.rx_red_discard;
*ptr++ = vpath_info->rx_stats.rx_queue_full_discard;
*ptr++ = vpath_info->rx_stats.rx_mpa_ok_frms;
}
*ptr++ = 0;
for (k = 0; k < vdev->max_config_port; k++) {
*ptr++ = xmac_stats->aggr_stats[k].tx_frms;
*ptr++ = xmac_stats->aggr_stats[k].tx_data_octets;
*ptr++ = xmac_stats->aggr_stats[k].tx_mcast_frms;
*ptr++ = xmac_stats->aggr_stats[k].tx_bcast_frms;
*ptr++ = xmac_stats->aggr_stats[k].tx_discarded_frms;
*ptr++ = xmac_stats->aggr_stats[k].tx_errored_frms;
*ptr++ = xmac_stats->aggr_stats[k].rx_frms;
*ptr++ = xmac_stats->aggr_stats[k].rx_data_octets;
*ptr++ = xmac_stats->aggr_stats[k].rx_mcast_frms;
*ptr++ = xmac_stats->aggr_stats[k].rx_bcast_frms;
*ptr++ = xmac_stats->aggr_stats[k].rx_discarded_frms;
*ptr++ = xmac_stats->aggr_stats[k].rx_errored_frms;
*ptr++ = xmac_stats->aggr_stats[k].rx_unknown_slow_proto_frms;
}
*ptr++ = 0;
for (k = 0; k < vdev->max_config_port; k++) {
*ptr++ = xmac_stats->port_stats[k].tx_ttl_frms;
*ptr++ = xmac_stats->port_stats[k].tx_ttl_octets;
*ptr++ = xmac_stats->port_stats[k].tx_data_octets;
*ptr++ = xmac_stats->port_stats[k].tx_mcast_frms;
*ptr++ = xmac_stats->port_stats[k].tx_bcast_frms;
*ptr++ = xmac_stats->port_stats[k].tx_ucast_frms;
*ptr++ = xmac_stats->port_stats[k].tx_tagged_frms;
*ptr++ = xmac_stats->port_stats[k].tx_vld_ip;
*ptr++ = xmac_stats->port_stats[k].tx_vld_ip_octets;
*ptr++ = xmac_stats->port_stats[k].tx_icmp;
*ptr++ = xmac_stats->port_stats[k].tx_tcp;
*ptr++ = xmac_stats->port_stats[k].tx_rst_tcp;
*ptr++ = xmac_stats->port_stats[k].tx_udp;
*ptr++ = xmac_stats->port_stats[k].tx_parse_error;
*ptr++ = xmac_stats->port_stats[k].tx_unknown_protocol;
*ptr++ = xmac_stats->port_stats[k].tx_pause_ctrl_frms;
*ptr++ = xmac_stats->port_stats[k].tx_marker_pdu_frms;
*ptr++ = xmac_stats->port_stats[k].tx_lacpdu_frms;
*ptr++ = xmac_stats->port_stats[k].tx_drop_ip;
*ptr++ = xmac_stats->port_stats[k].tx_marker_resp_pdu_frms;
*ptr++ = xmac_stats->port_stats[k].tx_xgmii_char2_match;
*ptr++ = xmac_stats->port_stats[k].tx_xgmii_char1_match;
*ptr++ = xmac_stats->port_stats[k].tx_xgmii_column2_match;
*ptr++ = xmac_stats->port_stats[k].tx_xgmii_column1_match;
*ptr++ = xmac_stats->port_stats[k].tx_any_err_frms;
*ptr++ = xmac_stats->port_stats[k].tx_drop_frms;
*ptr++ = xmac_stats->port_stats[k].rx_ttl_frms;
*ptr++ = xmac_stats->port_stats[k].rx_vld_frms;
*ptr++ = xmac_stats->port_stats[k].rx_offload_frms;
*ptr++ = xmac_stats->port_stats[k].rx_ttl_octets;
*ptr++ = xmac_stats->port_stats[k].rx_data_octets;
*ptr++ = xmac_stats->port_stats[k].rx_offload_octets;
*ptr++ = xmac_stats->port_stats[k].rx_vld_mcast_frms;
*ptr++ = xmac_stats->port_stats[k].rx_vld_bcast_frms;
*ptr++ = xmac_stats->port_stats[k].rx_accepted_ucast_frms;
*ptr++ = xmac_stats->port_stats[k].rx_accepted_nucast_frms;
*ptr++ = xmac_stats->port_stats[k].rx_tagged_frms;
*ptr++ = xmac_stats->port_stats[k].rx_long_frms;
*ptr++ = xmac_stats->port_stats[k].rx_usized_frms;
*ptr++ = xmac_stats->port_stats[k].rx_osized_frms;
*ptr++ = xmac_stats->port_stats[k].rx_frag_frms;
*ptr++ = xmac_stats->port_stats[k].rx_jabber_frms;
*ptr++ = xmac_stats->port_stats[k].rx_ttl_64_frms;
*ptr++ = xmac_stats->port_stats[k].rx_ttl_65_127_frms;
*ptr++ = xmac_stats->port_stats[k].rx_ttl_128_255_frms;
*ptr++ = xmac_stats->port_stats[k].rx_ttl_256_511_frms;
*ptr++ = xmac_stats->port_stats[k].rx_ttl_512_1023_frms;
*ptr++ = xmac_stats->port_stats[k].rx_ttl_1024_1518_frms;
*ptr++ = xmac_stats->port_stats[k].rx_ttl_1519_4095_frms;
*ptr++ = xmac_stats->port_stats[k].rx_ttl_4096_8191_frms;
*ptr++ = xmac_stats->port_stats[k].rx_ttl_8192_max_frms;
*ptr++ = xmac_stats->port_stats[k].rx_ttl_gt_max_frms;
*ptr++ = xmac_stats->port_stats[k].rx_ip;
*ptr++ = xmac_stats->port_stats[k].rx_accepted_ip;
*ptr++ = xmac_stats->port_stats[k].rx_ip_octets;
*ptr++ = xmac_stats->port_stats[k].rx_err_ip;
*ptr++ = xmac_stats->port_stats[k].rx_icmp;
*ptr++ = xmac_stats->port_stats[k].rx_tcp;
*ptr++ = xmac_stats->port_stats[k].rx_udp;
*ptr++ = xmac_stats->port_stats[k].rx_err_tcp;
*ptr++ = xmac_stats->port_stats[k].rx_pause_count;
*ptr++ = xmac_stats->port_stats[k].rx_pause_ctrl_frms;
*ptr++ = xmac_stats->port_stats[k].rx_unsup_ctrl_frms;
*ptr++ = xmac_stats->port_stats[k].rx_fcs_err_frms;
*ptr++ = xmac_stats->port_stats[k].rx_in_rng_len_err_frms;
*ptr++ = xmac_stats->port_stats[k].rx_out_rng_len_err_frms;
*ptr++ = xmac_stats->port_stats[k].rx_drop_frms;
*ptr++ = xmac_stats->port_stats[k].rx_discarded_frms;
*ptr++ = xmac_stats->port_stats[k].rx_drop_ip;
*ptr++ = xmac_stats->port_stats[k].rx_drop_udp;
*ptr++ = xmac_stats->port_stats[k].rx_marker_pdu_frms;
*ptr++ = xmac_stats->port_stats[k].rx_lacpdu_frms;
*ptr++ = xmac_stats->port_stats[k].rx_unknown_pdu_frms;
*ptr++ = xmac_stats->port_stats[k].rx_marker_resp_pdu_frms;
*ptr++ = xmac_stats->port_stats[k].rx_fcs_discard;
*ptr++ = xmac_stats->port_stats[k].rx_illegal_pdu_frms;
*ptr++ = xmac_stats->port_stats[k].rx_switch_discard;
*ptr++ = xmac_stats->port_stats[k].rx_len_discard;
*ptr++ = xmac_stats->port_stats[k].rx_rpa_discard;
*ptr++ = xmac_stats->port_stats[k].rx_l2_mgmt_discard;
*ptr++ = xmac_stats->port_stats[k].rx_rts_discard;
*ptr++ = xmac_stats->port_stats[k].rx_trash_discard;
*ptr++ = xmac_stats->port_stats[k].rx_buff_full_discard;
*ptr++ = xmac_stats->port_stats[k].rx_red_discard;
*ptr++ = xmac_stats->port_stats[k].rx_xgmii_ctrl_err_cnt;
*ptr++ = xmac_stats->port_stats[k].rx_xgmii_data_err_cnt;
*ptr++ = xmac_stats->port_stats[k].rx_xgmii_char1_match;
*ptr++ = xmac_stats->port_stats[k].rx_xgmii_err_sym;
*ptr++ = xmac_stats->port_stats[k].rx_xgmii_column1_match;
*ptr++ = xmac_stats->port_stats[k].rx_xgmii_char2_match;
*ptr++ = xmac_stats->port_stats[k].rx_local_fault;
*ptr++ = xmac_stats->port_stats[k].rx_xgmii_column2_match;
*ptr++ = xmac_stats->port_stats[k].rx_jettison;
*ptr++ = xmac_stats->port_stats[k].rx_remote_fault;
}
*ptr++ = 0;
for (k = 0; k < vdev->no_of_vpath; k++) {
struct vxge_hw_vpath_stats_sw_info *vpath_info;
vpath = &vdev->vpaths[k];
j = vpath->device_id;
vpath_info = (struct vxge_hw_vpath_stats_sw_info *)
&sw_stats->vpath_info[j];
*ptr++ = vpath_info->soft_reset_cnt;
*ptr++ = vpath_info->error_stats.unknown_alarms;
*ptr++ = vpath_info->error_stats.network_sustained_fault;
*ptr++ = vpath_info->error_stats.network_sustained_ok;
*ptr++ = vpath_info->error_stats.kdfcctl_fifo0_overwrite;
*ptr++ = vpath_info->error_stats.kdfcctl_fifo0_poison;
*ptr++ = vpath_info->error_stats.kdfcctl_fifo0_dma_error;
*ptr++ = vpath_info->error_stats.dblgen_fifo0_overflow;
*ptr++ = vpath_info->error_stats.statsb_pif_chain_error;
*ptr++ = vpath_info->error_stats.statsb_drop_timeout;
*ptr++ = vpath_info->error_stats.target_illegal_access;
*ptr++ = vpath_info->error_stats.ini_serr_det;
*ptr++ = vpath_info->error_stats.prc_ring_bumps;
*ptr++ = vpath_info->error_stats.prc_rxdcm_sc_err;
*ptr++ = vpath_info->error_stats.prc_rxdcm_sc_abort;
*ptr++ = vpath_info->error_stats.prc_quanta_size_err;
*ptr++ = vpath_info->ring_stats.common_stats.full_cnt;
*ptr++ = vpath_info->ring_stats.common_stats.usage_cnt;
*ptr++ = vpath_info->ring_stats.common_stats.usage_max;
*ptr++ = vpath_info->ring_stats.common_stats.
reserve_free_swaps_cnt;
*ptr++ = vpath_info->ring_stats.common_stats.total_compl_cnt;
for (j = 0; j < VXGE_HW_DTR_MAX_T_CODE; j++)
*ptr++ = vpath_info->ring_stats.rxd_t_code_err_cnt[j];
*ptr++ = vpath_info->fifo_stats.common_stats.full_cnt;
*ptr++ = vpath_info->fifo_stats.common_stats.usage_cnt;
*ptr++ = vpath_info->fifo_stats.common_stats.usage_max;
*ptr++ = vpath_info->fifo_stats.common_stats.
reserve_free_swaps_cnt;
*ptr++ = vpath_info->fifo_stats.common_stats.total_compl_cnt;
*ptr++ = vpath_info->fifo_stats.total_posts;
*ptr++ = vpath_info->fifo_stats.total_buffers;
for (j = 0; j < VXGE_HW_DTR_MAX_T_CODE; j++)
*ptr++ = vpath_info->fifo_stats.txd_t_code_err_cnt[j];
}
*ptr++ = 0;
for (k = 0; k < vdev->no_of_vpath; k++) {
struct vxge_hw_vpath_stats_hw_info *vpath_info;
vpath = &vdev->vpaths[k];
j = vpath->device_id;
vpath_info = hw_stats->vpath_info[j];
if (!vpath_info) {
memset(ptr, 0, VXGE_HW_VPATH_STATS_LEN * sizeof(u64));
ptr += VXGE_HW_VPATH_STATS_LEN;
continue;
}
*ptr++ = vpath_info->ini_num_mwr_sent;
*ptr++ = vpath_info->ini_num_mrd_sent;
*ptr++ = vpath_info->ini_num_cpl_rcvd;
*ptr++ = vpath_info->ini_num_mwr_byte_sent;
*ptr++ = vpath_info->ini_num_cpl_byte_rcvd;
*ptr++ = vpath_info->wrcrdtarb_xoff;
*ptr++ = vpath_info->rdcrdtarb_xoff;
*ptr++ = vpath_info->vpath_genstats_count0;
*ptr++ = vpath_info->vpath_genstats_count1;
*ptr++ = vpath_info->vpath_genstats_count2;
*ptr++ = vpath_info->vpath_genstats_count3;
*ptr++ = vpath_info->vpath_genstats_count4;
*ptr++ = vpath_info->vpath_genstats_count5;
*ptr++ = vpath_info->prog_event_vnum0;
*ptr++ = vpath_info->prog_event_vnum1;
*ptr++ = vpath_info->prog_event_vnum2;
*ptr++ = vpath_info->prog_event_vnum3;
*ptr++ = vpath_info->rx_multi_cast_frame_discard;
*ptr++ = vpath_info->rx_frm_transferred;
*ptr++ = vpath_info->rxd_returned;
*ptr++ = vpath_info->rx_mpa_len_fail_frms;
*ptr++ = vpath_info->rx_mpa_mrk_fail_frms;
*ptr++ = vpath_info->rx_mpa_crc_fail_frms;
*ptr++ = vpath_info->rx_permitted_frms;
*ptr++ = vpath_info->rx_vp_reset_discarded_frms;
*ptr++ = vpath_info->rx_wol_frms;
*ptr++ = vpath_info->tx_vp_reset_discarded_frms;
}
*ptr++ = 0;
*ptr++ = vdev->stats.vpaths_open;
*ptr++ = vdev->stats.vpath_open_fail;
*ptr++ = vdev->stats.link_up;
*ptr++ = vdev->stats.link_down;
for (k = 0; k < vdev->no_of_vpath; k++) {
*ptr += vdev->vpaths[k].fifo.stats.tx_frms;
*(ptr + 1) += vdev->vpaths[k].fifo.stats.tx_errors;
*(ptr + 2) += vdev->vpaths[k].fifo.stats.tx_bytes;
*(ptr + 3) += vdev->vpaths[k].fifo.stats.txd_not_free;
*(ptr + 4) += vdev->vpaths[k].fifo.stats.txd_out_of_desc;
*(ptr + 5) += vdev->vpaths[k].ring.stats.rx_frms;
*(ptr + 6) += vdev->vpaths[k].ring.stats.rx_errors;
*(ptr + 7) += vdev->vpaths[k].ring.stats.rx_bytes;
*(ptr + 8) += vdev->vpaths[k].ring.stats.rx_mcast;
*(ptr + 9) += vdev->vpaths[k].fifo.stats.pci_map_fail +
vdev->vpaths[k].ring.stats.pci_map_fail;
*(ptr + 10) += vdev->vpaths[k].ring.stats.skb_alloc_fail;
}
ptr += 12;
kfree(xmac_stats);
kfree(sw_stats);
kfree(hw_stats);
}
static void vxge_ethtool_get_strings(struct net_device *dev, u32 stringset,
u8 *data)
{
int stat_size = 0;
int i, j;
struct vxgedev *vdev = netdev_priv(dev);
switch (stringset) {
case ETH_SS_STATS:
vxge_add_string("VPATH STATISTICS%s\t\t\t",
&stat_size, data, "");
for (i = 0; i < vdev->no_of_vpath; i++) {
vxge_add_string("tx_ttl_eth_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_ttl_eth_octects_%d\t\t",
&stat_size, data, i);
vxge_add_string("tx_data_octects_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_mcast_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_bcast_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_ucast_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_tagged_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_vld_ip_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_vld_ip_octects_%d\t\t",
&stat_size, data, i);
vxge_add_string("tx_icmp_%d\t\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_tcp_%d\t\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_rst_tcp_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_udp_%d\t\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_unknown_proto_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_lost_ip_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_parse_error_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_tcp_offload_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_retx_tcp_offload_%d\t\t",
&stat_size, data, i);
vxge_add_string("tx_lost_ip_offload_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_eth_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_vld_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_offload_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_eth_octects_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_data_octects_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_offload_octects_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_vld_mcast_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_vld_bcast_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_accepted_ucast_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_accepted_nucast_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_tagged_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_long_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_usized_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_osized_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_frag_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_jabber_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_64_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_65_127_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_128_255_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_256_511_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_512_1023_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_1024_1518_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_1519_4095_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_4096_8191_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_8192_max_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_gt_max_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_ip%d\t\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_accepted_ip_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_ip_octects_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_err_ip_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_icmp_%d\t\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_tcp_%d\t\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_udp_%d\t\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_err_tcp_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_lost_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_lost_ip_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_lost_ip_offload_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_various_discard_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_sleep_discard_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_red_discard_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_queue_full_discard_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_mpa_ok_frms_%d\t\t\t",
&stat_size, data, i);
}
vxge_add_string("\nAGGR STATISTICS%s\t\t\t\t",
&stat_size, data, "");
for (i = 0; i < vdev->max_config_port; i++) {
vxge_add_string("tx_frms_%d\t\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_data_octects_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_mcast_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_bcast_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_discarded_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("tx_errored_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_frms_%d\t\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_data_octects_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_mcast_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_bcast_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_discarded_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_errored_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_unknown_slow_proto_frms_%d\t",
&stat_size, data, i);
}
vxge_add_string("\nPORT STATISTICS%s\t\t\t\t",
&stat_size, data, "");
for (i = 0; i < vdev->max_config_port; i++) {
vxge_add_string("tx_ttl_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_ttl_octects_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_data_octects_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_mcast_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_bcast_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_ucast_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_tagged_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_vld_ip_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_vld_ip_octects_%d\t\t",
&stat_size, data, i);
vxge_add_string("tx_icmp_%d\t\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_tcp_%d\t\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_rst_tcp_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_udp_%d\t\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_parse_error_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_unknown_protocol_%d\t\t",
&stat_size, data, i);
vxge_add_string("tx_pause_ctrl_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("tx_marker_pdu_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("tx_lacpdu_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_drop_ip_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_marker_resp_pdu_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("tx_xgmii_char2_match_%d\t\t",
&stat_size, data, i);
vxge_add_string("tx_xgmii_char1_match_%d\t\t",
&stat_size, data, i);
vxge_add_string("tx_xgmii_column2_match_%d\t\t",
&stat_size, data, i);
vxge_add_string("tx_xgmii_column1_match_%d\t\t",
&stat_size, data, i);
vxge_add_string("tx_any_err_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_drop_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_vld_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_offload_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_octects_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_data_octects_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_offload_octects_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_vld_mcast_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_vld_bcast_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_accepted_ucast_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_accepted_nucast_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_tagged_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_long_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_usized_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_osized_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_frag_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_jabber_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_64_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_65_127_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_128_255_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_256_511_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_512_1023_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_1024_1518_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_1519_4095_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_4096_8191_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_8192_max_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_ttl_gt_max_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_ip_%d\t\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_accepted_ip_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_ip_octets_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_err_ip_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_icmp_%d\t\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_tcp_%d\t\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_udp_%d\t\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_err_tcp_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_pause_count_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_pause_ctrl_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_unsup_ctrl_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_fcs_err_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_in_rng_len_err_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_out_rng_len_err_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_drop_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_discard_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_drop_ip_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_drop_udp_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_marker_pdu_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_lacpdu_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_unknown_pdu_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_marker_resp_pdu_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_fcs_discard_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_illegal_pdu_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_switch_discard_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_len_discard_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_rpa_discard_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_l2_mgmt_discard_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_rts_discard_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_trash_discard_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_buff_full_discard_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_red_discard_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_xgmii_ctrl_err_cnt_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_xgmii_data_err_cnt_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_xgmii_char1_match_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_xgmii_err_sym_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_xgmii_column1_match_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_xgmii_char2_match_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_local_fault_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_xgmii_column2_match_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_jettison_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_remote_fault_%d\t\t\t",
&stat_size, data, i);
}
vxge_add_string("\n SOFTWARE STATISTICS%s\t\t\t",
&stat_size, data, "");
for (i = 0; i < vdev->no_of_vpath; i++) {
vxge_add_string("soft_reset_cnt_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("unknown_alarms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("network_sustained_fault_%d\t\t",
&stat_size, data, i);
vxge_add_string("network_sustained_ok_%d\t\t",
&stat_size, data, i);
vxge_add_string("kdfcctl_fifo0_overwrite_%d\t\t",
&stat_size, data, i);
vxge_add_string("kdfcctl_fifo0_poison_%d\t\t",
&stat_size, data, i);
vxge_add_string("kdfcctl_fifo0_dma_error_%d\t\t",
&stat_size, data, i);
vxge_add_string("dblgen_fifo0_overflow_%d\t\t",
&stat_size, data, i);
vxge_add_string("statsb_pif_chain_error_%d\t\t",
&stat_size, data, i);
vxge_add_string("statsb_drop_timeout_%d\t\t",
&stat_size, data, i);
vxge_add_string("target_illegal_access_%d\t\t",
&stat_size, data, i);
vxge_add_string("ini_serr_det_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("prc_ring_bumps_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("prc_rxdcm_sc_err_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("prc_rxdcm_sc_abort_%d\t\t",
&stat_size, data, i);
vxge_add_string("prc_quanta_size_err_%d\t\t",
&stat_size, data, i);
vxge_add_string("ring_full_cnt_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("ring_usage_cnt_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("ring_usage_max_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("ring_reserve_free_swaps_cnt_%d\t",
&stat_size, data, i);
vxge_add_string("ring_total_compl_cnt_%d\t\t",
&stat_size, data, i);
for (j = 0; j < VXGE_HW_DTR_MAX_T_CODE; j++)
vxge_add_string("rxd_t_code_err_cnt%d_%d\t\t",
&stat_size, data, j, i);
vxge_add_string("fifo_full_cnt_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("fifo_usage_cnt_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("fifo_usage_max_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("fifo_reserve_free_swaps_cnt_%d\t",
&stat_size, data, i);
vxge_add_string("fifo_total_compl_cnt_%d\t\t",
&stat_size, data, i);
vxge_add_string("fifo_total_posts_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("fifo_total_buffers_%d\t\t",
&stat_size, data, i);
for (j = 0; j < VXGE_HW_DTR_MAX_T_CODE; j++)
vxge_add_string("txd_t_code_err_cnt%d_%d\t\t",
&stat_size, data, j, i);
}
vxge_add_string("\n HARDWARE STATISTICS%s\t\t\t",
&stat_size, data, "");
for (i = 0; i < vdev->no_of_vpath; i++) {
vxge_add_string("ini_num_mwr_sent_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("ini_num_mrd_sent_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("ini_num_cpl_rcvd_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("ini_num_mwr_byte_sent_%d\t\t",
&stat_size, data, i);
vxge_add_string("ini_num_cpl_byte_rcvd_%d\t\t",
&stat_size, data, i);
vxge_add_string("wrcrdtarb_xoff_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rdcrdtarb_xoff_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("vpath_genstats_count0_%d\t\t",
&stat_size, data, i);
vxge_add_string("vpath_genstats_count1_%d\t\t",
&stat_size, data, i);
vxge_add_string("vpath_genstats_count2_%d\t\t",
&stat_size, data, i);
vxge_add_string("vpath_genstats_count3_%d\t\t",
&stat_size, data, i);
vxge_add_string("vpath_genstats_count4_%d\t\t",
&stat_size, data, i);
vxge_add_string("vpath_genstats_count5_%d\t\t",
&stat_size, data, i);
vxge_add_string("prog_event_vnum0_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("prog_event_vnum1_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("prog_event_vnum2_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("prog_event_vnum3_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_multi_cast_frame_discard_%d\t",
&stat_size, data, i);
vxge_add_string("rx_frm_transferred_%d\t\t",
&stat_size, data, i);
vxge_add_string("rxd_returned_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("rx_mpa_len_fail_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_mpa_mrk_fail_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_mpa_crc_fail_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_permitted_frms_%d\t\t",
&stat_size, data, i);
vxge_add_string("rx_vp_reset_discarded_frms_%d\t",
&stat_size, data, i);
vxge_add_string("rx_wol_frms_%d\t\t\t",
&stat_size, data, i);
vxge_add_string("tx_vp_reset_discarded_frms_%d\t",
&stat_size, data, i);
}
memcpy(data + stat_size, &ethtool_driver_stats_keys,
sizeof(ethtool_driver_stats_keys));
}
}
static int vxge_ethtool_get_regs_len(struct net_device *dev)
{
struct vxgedev *vdev = netdev_priv(dev);
return sizeof(struct vxge_hw_vpath_reg) * vdev->no_of_vpath;
}
static int vxge_ethtool_get_sset_count(struct net_device *dev, int sset)
{
struct vxgedev *vdev = netdev_priv(dev);
switch (sset) {
case ETH_SS_STATS:
return VXGE_TITLE_LEN +
(vdev->no_of_vpath * VXGE_HW_VPATH_STATS_LEN) +
(vdev->max_config_port * VXGE_HW_AGGR_STATS_LEN) +
(vdev->max_config_port * VXGE_HW_PORT_STATS_LEN) +
(vdev->no_of_vpath * VXGE_HW_VPATH_TX_STATS_LEN) +
(vdev->no_of_vpath * VXGE_HW_VPATH_RX_STATS_LEN) +
(vdev->no_of_vpath * VXGE_SW_STATS_LEN) +
DRIVER_STAT_LEN;
default:
return -EOPNOTSUPP;
}
}
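/*
 * Editorial worked example: with one vpath and one port the ETH_SS_STATS
 * count above evaluates to
 *	5 (title) + 27 + 13 + 94 + 19 + 42 + 60 + 16 (driver keys) = 276
 * strings, using the *_STATS_LEN values defined in vxge-ethtool.h below
 * and the 16 entries of ethtool_driver_stats_keys[].
 */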
static int vxge_fw_flash(struct net_device *dev, struct ethtool_flash *parms)
{
struct vxgedev *vdev = netdev_priv(dev);
if (vdev->max_vpath_supported != VXGE_HW_MAX_VIRTUAL_PATHS) {
printk(KERN_INFO "Single Function Mode is required to flash the"
" firmware\n");
return -EINVAL;
}
if (netif_running(dev)) {
printk(KERN_INFO "Interface %s must be down to flash the "
"firmware\n", dev->name);
return -EBUSY;
}
return vxge_fw_upgrade(vdev, parms->data, 1);
}
static const struct ethtool_ops vxge_ethtool_ops = {
.get_drvinfo = vxge_ethtool_gdrvinfo,
.get_regs_len = vxge_ethtool_get_regs_len,
.get_regs = vxge_ethtool_gregs,
.get_link = ethtool_op_get_link,
.get_pauseparam = vxge_ethtool_getpause_data,
.set_pauseparam = vxge_ethtool_setpause_data,
.get_strings = vxge_ethtool_get_strings,
.set_phys_id = vxge_ethtool_idnic,
.get_sset_count = vxge_ethtool_get_sset_count,
.get_ethtool_stats = vxge_get_ethtool_stats,
.flash_device = vxge_fw_flash,
.get_link_ksettings = vxge_ethtool_get_link_ksettings,
.set_link_ksettings = vxge_ethtool_set_link_ksettings,
};
void vxge_initialize_ethtool_ops(struct net_device *ndev)
{
ndev->ethtool_ops = &vxge_ethtool_ops;
}
/******************************************************************************
* This software may be used and distributed according to the terms of
* the GNU General Public License (GPL), incorporated herein by reference.
* Drivers based on or derived from this code fall under the GPL and must
* retain the authorship, copyright and license notice. This file is not
* a complete program and may only be used when the entire operating
* system is licensed under the GPL.
* See the file COPYING in this distribution for more information.
*
* vxge-ethtool.h: Driver for Exar Corp's X3100 Series 10GbE PCIe I/O
* Virtualized Server Adapter.
* Copyright(c) 2002-2010 Exar Corp.
******************************************************************************/
#ifndef _VXGE_ETHTOOL_H
#define _VXGE_ETHTOOL_H
#include "vxge-main.h"
/* Ethtool related variables and Macros. */
static int vxge_ethtool_get_sset_count(struct net_device *dev, int sset);
#define VXGE_TITLE_LEN 5
#define VXGE_HW_VPATH_STATS_LEN 27
#define VXGE_HW_AGGR_STATS_LEN 13
#define VXGE_HW_PORT_STATS_LEN 94
#define VXGE_HW_VPATH_TX_STATS_LEN 19
#define VXGE_HW_VPATH_RX_STATS_LEN 42
#define VXGE_SW_STATS_LEN 60
#define VXGE_HW_STATS_LEN (VXGE_HW_VPATH_STATS_LEN +\
VXGE_HW_AGGR_STATS_LEN +\
VXGE_HW_PORT_STATS_LEN +\
VXGE_HW_VPATH_TX_STATS_LEN +\
VXGE_HW_VPATH_RX_STATS_LEN)
#define DRIVER_STAT_LEN (sizeof(ethtool_driver_stats_keys)/ETH_GSTRING_LEN)
#define STAT_LEN (VXGE_HW_STATS_LEN + DRIVER_STAT_LEN + VXGE_SW_STATS_LEN)
/* Maximum flicker time of adapter LED */
#define VXGE_MAX_FLICKER_TIME (60 * HZ) /* 60 seconds */
#define VXGE_FLICKER_ON 1
#define VXGE_FLICKER_OFF 0
#define vxge_add_string(fmt, size, buf, ...) {\
snprintf(buf + *size, ETH_GSTRING_LEN, fmt, __VA_ARGS__); \
*size += ETH_GSTRING_LEN; \
}
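/*
 * Illustrative sketch (not part of the original driver): how a caller might
 * emit formatted stat names with vxge_add_string().  The helper name and the
 * "names" buffer are hypothetical; the buffer must hold at least two
 * ETH_GSTRING_LEN entries.
 */
static inline void vxge_example_fill_strings(u8 *names)
{
	int stat_size = 0;

	/* each call formats one name and advances by ETH_GSTRING_LEN */
	vxge_add_string("rx_frms_%d\t\t\t", &stat_size, names, 0);
	vxge_add_string("tx_frms_%d\t\t\t", &stat_size, names, 0);
}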
#endif /*_VXGE_ETHTOOL_H*/
/******************************************************************************
* This software may be used and distributed according to the terms of
* the GNU General Public License (GPL), incorporated herein by reference.
* Drivers based on or derived from this code fall under the GPL and must
* retain the authorship, copyright and license notice. This file is not
* a complete program and may only be used when the entire operating
* system is licensed under the GPL.
* See the file COPYING in this distribution for more information.
*
* vxge-main.h: Driver for Exar Corp's X3100 Series 10GbE PCIe I/O
* Virtualized Server Adapter.
* Copyright(c) 2002-2010 Exar Corp.
******************************************************************************/
#ifndef VXGE_MAIN_H
#define VXGE_MAIN_H
#include "vxge-traffic.h"
#include "vxge-config.h"
#include "vxge-version.h"
#include <linux/list.h>
#include <linux/bitops.h>
#include <linux/if_vlan.h>
#define VXGE_DRIVER_NAME "vxge"
#define VXGE_DRIVER_VENDOR "Neterion, Inc"
#define VXGE_DRIVER_FW_VERSION_MAJOR 1
#define DRV_VERSION VXGE_VERSION_MAJOR"."VXGE_VERSION_MINOR"."\
VXGE_VERSION_FIX"."VXGE_VERSION_BUILD"-"\
VXGE_VERSION_FOR
#define PCI_DEVICE_ID_TITAN_WIN 0x5733
#define PCI_DEVICE_ID_TITAN_UNI 0x5833
#define VXGE_HW_TITAN1_PCI_REVISION 1
#define VXGE_HW_TITAN1A_PCI_REVISION 2
#define VXGE_USE_DEFAULT 0xffffffff
#define VXGE_HW_VPATH_MSIX_ACTIVE 4
#define VXGE_ALARM_MSIX_ID 2
#define VXGE_HW_RXSYNC_FREQ_CNT 4
#define VXGE_LL_WATCH_DOG_TIMEOUT (15 * HZ)
#define VXGE_LL_RX_COPY_THRESHOLD 256
#define VXGE_DEF_FIFO_LENGTH 84
#define NO_STEERING 0
#define PORT_STEERING 0x1
#define RTH_STEERING 0x2
#define RX_TOS_STEERING 0x3
#define RX_VLAN_STEERING 0x4
#define RTH_BUCKET_SIZE 4
#define TX_PRIORITY_STEERING 1
#define TX_VLAN_STEERING 2
#define TX_PORT_STEERING 3
#define TX_MULTIQ_STEERING 4
#define VXGE_HW_MAC_ADDR_LEARN_DEFAULT VXGE_HW_RTS_MAC_DISABLE
#define VXGE_TTI_BTIMER_VAL 250000
#define VXGE_TTI_LTIMER_VAL 1000
#define VXGE_T1A_TTI_LTIMER_VAL 80
#define VXGE_TTI_RTIMER_VAL 0
#define VXGE_TTI_RTIMER_ADAPT_VAL 10
#define VXGE_T1A_TTI_RTIMER_VAL 400
#define VXGE_RTI_BTIMER_VAL 250
#define VXGE_RTI_LTIMER_VAL 100
#define VXGE_RTI_RTIMER_VAL 0
#define VXGE_RTI_RTIMER_ADAPT_VAL 15
#define VXGE_FIFO_INDICATE_MAX_PKTS VXGE_DEF_FIFO_LENGTH
#define VXGE_ISR_POLLING_CNT 8
#define VXGE_MAX_CONFIG_DEV 0xFF
#define VXGE_EXEC_MODE_DISABLE 0
#define VXGE_EXEC_MODE_ENABLE 1
#define VXGE_MAX_CONFIG_PORT 1
#define VXGE_ALL_VID_DISABLE 0
#define VXGE_ALL_VID_ENABLE 1
#define VXGE_PAUSE_CTRL_DISABLE 0
#define VXGE_PAUSE_CTRL_ENABLE 1
#define TTI_TX_URANGE_A 5
#define TTI_TX_URANGE_B 15
#define TTI_TX_URANGE_C 40
#define TTI_TX_UFC_A 5
#define TTI_TX_UFC_B 40
#define TTI_TX_UFC_C 60
#define TTI_TX_UFC_D 100
#define TTI_T1A_TX_UFC_A 30
#define TTI_T1A_TX_UFC_B 80
/*
 * Slope = (max_mtu - min_mtu) / (max_mtu_ufc - min_mtu_ufc), i.e. ~93:
 * UFC_C is 60 at 9k MTU and 140 at 1.5k MTU.
 */
#define TTI_T1A_TX_UFC_C(mtu) (60 + ((VXGE_HW_MAX_MTU - mtu) / 93))
/* Slope ~37: UFC_D is 100 at 9k MTU and 300 at 1.5k MTU. */
#define TTI_T1A_TX_UFC_D(mtu) (100 + ((VXGE_HW_MAX_MTU - mtu) / 37))
#define RTI_RX_URANGE_A 5
#define RTI_RX_URANGE_B 15
#define RTI_RX_URANGE_C 40
#define RTI_T1A_RX_URANGE_A 1
#define RTI_T1A_RX_URANGE_B 20
#define RTI_T1A_RX_URANGE_C 50
#define RTI_RX_UFC_A 1
#define RTI_RX_UFC_B 5
#define RTI_RX_UFC_C 10
#define RTI_RX_UFC_D 15
#define RTI_T1A_RX_UFC_B 20
#define RTI_T1A_RX_UFC_C 50
#define RTI_T1A_RX_UFC_D 60
/*
* The interrupt rate is maintained at 3k per second with the moderation
* parameters for most traffic but not all. This is the maximum interrupt
* count allowed per function with INTA or per vector in the case of
* MSI-X in a 10 millisecond time period. Enabled only for Titan 1A.
*/
#define VXGE_T1A_MAX_INTERRUPT_COUNT 100
#define VXGE_T1A_MAX_TX_INTERRUPT_COUNT 200
/* Timer period in milliseconds */
#define VXGE_TIMER_DELAY 10000
#define VXGE_LL_MAX_FRAME_SIZE(dev) ((dev)->mtu + VXGE_HW_MAC_HEADER_MAX_SIZE)
#define is_sriov(function_mode) \
((function_mode == VXGE_HW_FUNCTION_MODE_SRIOV) || \
(function_mode == VXGE_HW_FUNCTION_MODE_SRIOV_8) || \
(function_mode == VXGE_HW_FUNCTION_MODE_SRIOV_4))
enum vxge_reset_event {
/* reset events */
VXGE_LL_VPATH_RESET = 0,
VXGE_LL_DEVICE_RESET = 1,
VXGE_LL_FULL_RESET = 2,
VXGE_LL_START_RESET = 3,
VXGE_LL_COMPL_RESET = 4
};
/* These flags represent the device's temporary state */
enum vxge_device_state_t {
__VXGE_STATE_RESET_CARD = 0,
__VXGE_STATE_CARD_UP
};
enum vxge_mac_addr_state {
/* mac address states */
VXGE_LL_MAC_ADDR_IN_LIST = 0,
VXGE_LL_MAC_ADDR_IN_DA_TABLE = 1
};
struct vxge_drv_config {
int config_dev_cnt;
int total_dev_cnt;
int g_no_cpus;
unsigned int vpath_per_dev;
};
struct macInfo {
unsigned char macaddr[ETH_ALEN];
unsigned char macmask[ETH_ALEN];
unsigned int vpath_no;
enum vxge_mac_addr_state state;
};
struct vxge_config {
int tx_pause_enable;
int rx_pause_enable;
int napi_weight;
int intr_type;
#define INTA 0
#define MSI 1
#define MSI_X 2
int addr_learn_en;
u32 rth_steering:2,
rth_algorithm:2,
rth_hash_type_tcpipv4:1,
rth_hash_type_ipv4:1,
rth_hash_type_tcpipv6:1,
rth_hash_type_ipv6:1,
rth_hash_type_tcpipv6ex:1,
rth_hash_type_ipv6ex:1,
rth_bkt_sz:8;
int rth_jhash_golden_ratio;
int tx_steering_type;
int fifo_indicate_max_pkts;
struct vxge_hw_device_hw_info device_hw_info;
};
struct vxge_msix_entry {
/* Mimicking the kernel's msix_entry struct. */
u16 vector;
u16 entry;
u16 in_use;
void *arg;
};
/* Software Statistics */
struct vxge_sw_stats {
/* Virtual Path */
unsigned long vpaths_open;
unsigned long vpath_open_fail;
/* Misc. */
unsigned long link_up;
unsigned long link_down;
};
struct vxge_mac_addrs {
struct list_head item;
u64 macaddr;
u64 macmask;
enum vxge_mac_addr_state state;
};
struct vxgedev;
struct vxge_fifo_stats {
struct u64_stats_sync syncp;
u64 tx_frms;
u64 tx_bytes;
unsigned long tx_errors;
unsigned long txd_not_free;
unsigned long txd_out_of_desc;
unsigned long pci_map_fail;
};
struct vxge_fifo {
struct net_device *ndev;
struct pci_dev *pdev;
struct __vxge_hw_fifo *handle;
struct netdev_queue *txq;
int tx_steering_type;
int indicate_max_pkts;
/* Adaptive interrupt moderation parameters used in T1A */
unsigned long interrupt_count;
unsigned long jiffies;
u32 tx_vector_no;
/* Tx stats */
struct vxge_fifo_stats stats;
} ____cacheline_aligned;
struct vxge_ring_stats {
struct u64_stats_sync syncp;
u64 rx_frms;
u64 rx_mcast;
u64 rx_bytes;
unsigned long rx_errors;
unsigned long rx_dropped;
unsigned long prev_rx_frms;
unsigned long pci_map_fail;
unsigned long skb_alloc_fail;
};
struct vxge_ring {
struct net_device *ndev;
struct pci_dev *pdev;
struct __vxge_hw_ring *handle;
/* The vpath id maintained in the driver -
* 0 to 'maximum_vpaths_in_function - 1'
*/
int driver_id;
/* Adaptive interrupt moderation parameters used in T1A */
unsigned long interrupt_count;
unsigned long jiffies;
/* copy of the flag indicating whether rx_hwts is to be used */
u32 rx_hwts:1;
int pkts_processed;
int budget;
struct napi_struct napi;
struct napi_struct *napi_p;
#define VXGE_MAX_MAC_ADDR_COUNT 30
int vlan_tag_strip;
u32 rx_vector_no;
enum vxge_hw_status last_status;
/* Rx stats */
struct vxge_ring_stats stats;
} ____cacheline_aligned;
struct vxge_vpath {
struct vxge_fifo fifo;
struct vxge_ring ring;
struct __vxge_hw_vpath_handle *handle;
/* Actual vpath id for this vpath in the device - 0 to 16 */
int device_id;
int max_mac_addr_cnt;
int is_configured;
int is_open;
struct vxgedev *vdev;
u8 macaddr[ETH_ALEN];
u8 macmask[ETH_ALEN];
#define VXGE_MAX_LEARN_MAC_ADDR_CNT 2048
/* mac addresses currently programmed into NIC */
u16 mac_addr_cnt;
u16 mcast_addr_cnt;
struct list_head mac_addr_list;
u32 level_err;
u32 level_trace;
};
#define VXGE_COPY_DEBUG_INFO_TO_LL(vdev, err, trace) { \
for (i = 0; i < vdev->no_of_vpath; i++) { \
vdev->vpaths[i].level_err = err; \
vdev->vpaths[i].level_trace = trace; \
} \
vdev->level_err = err; \
vdev->level_trace = trace; \
}
struct vxgedev {
struct net_device *ndev;
struct pci_dev *pdev;
struct __vxge_hw_device *devh;
unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
int vlan_tag_strip;
struct vxge_config config;
unsigned long state;
/* Indicates which vpath to reset */
unsigned long vp_reset;
/* Timer used for polling vpath resets */
struct timer_list vp_reset_timer;
/* Timer used for polling vpath lockup */
struct timer_list vp_lockup_timer;
/*
* Flags to track whether device is in All Multicast
* or in promiscuous mode.
*/
u16 all_multi_flg;
/* A flag indicating whether rx_hwts is to be used or not. */
u32 rx_hwts:1,
titan1:1;
struct vxge_msix_entry *vxge_entries;
struct msix_entry *entries;
/*
 * 4 MSI-X vectors for each of up to 17 vpaths;
 * 68 in total.
 */
#define VXGE_MAX_REQUESTED_MSIX 68
#define VXGE_INTR_STRLEN 80
char desc[VXGE_MAX_REQUESTED_MSIX][VXGE_INTR_STRLEN];
enum vxge_hw_event cric_err_event;
int max_vpath_supported;
int no_of_vpath;
struct napi_struct napi;
/* A debug option: when enabled and an error condition occurs,
 * the driver will take the following steps:
 * - mask all interrupts
 * - not clear the source of the alarm
 * - gracefully stop all I/O
 * A diagnostic dump of registers and stats at this point
 * reveals very useful information.
 */
int exec_mode;
int max_config_port;
struct vxge_vpath *vpaths;
struct __vxge_hw_vpath_handle *vp_handles[VXGE_HW_MAX_VIRTUAL_PATHS];
void __iomem *bar0;
struct vxge_sw_stats stats;
int mtu;
/* Below variables are used for vpath selection to transmit a packet */
u8 vpath_selector[VXGE_HW_MAX_VIRTUAL_PATHS];
u64 vpaths_deployed;
u32 intr_cnt;
u32 level_err;
u32 level_trace;
char fw_version[VXGE_HW_FW_STRLEN];
struct work_struct reset_task;
};
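/*
 * Illustrative sketch (not part of the original driver): using the
 * VXGE_COPY_DEBUG_INFO_TO_LL() macro defined above.  Note that the macro
 * expands to a for loop and relies on a loop counter "i" already being
 * declared in the calling scope; the helper name is hypothetical.
 */
static inline void vxge_example_set_debug_levels(struct vxgedev *vdev,
						 u32 err, u32 trace)
{
	int i;	/* required by VXGE_COPY_DEBUG_INFO_TO_LL() */

	VXGE_COPY_DEBUG_INFO_TO_LL(vdev, err, trace);
}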
struct vxge_rx_priv {
struct sk_buff *skb;
unsigned char *skb_data;
dma_addr_t data_dma;
dma_addr_t data_size;
};
struct vxge_tx_priv {
struct sk_buff *skb;
dma_addr_t dma_buffers[MAX_SKB_FRAGS+1];
};
#define VXGE_MODULE_PARAM_INT(p, val) \
static int p = val; \
module_param(p, int, 0)
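/*
 * Illustrative sketch (not part of the original driver): how a .c file could
 * declare an integer module parameter with the helper above.  The parameter
 * name "vxge_example_debug_level" is hypothetical.
 *
 * VXGE_MODULE_PARAM_INT(vxge_example_debug_level, 0);
 */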
static inline
void vxge_os_timer(struct timer_list *timer, void (*func)(struct timer_list *),
unsigned long timeout)
{
timer_setup(timer, func, 0);
mod_timer(timer, jiffies + timeout);
}
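/*
 * Illustrative sketch (not part of the original driver): arming a periodic
 * polling timer with vxge_os_timer().  The callback name and the half-second
 * period are hypothetical; struct vxgedev's vp_reset_timer is reused here
 * purely as an example timer.
 */
static void vxge_example_poll(struct timer_list *t)
{
	struct vxgedev *vdev = from_timer(vdev, t, vp_reset_timer);

	/* ... poll hardware state here, then re-arm the timer ... */
	mod_timer(&vdev->vp_reset_timer, jiffies + HZ / 2);
}

static inline void vxge_example_start_poll(struct vxgedev *vdev)
{
	vxge_os_timer(&vdev->vp_reset_timer, vxge_example_poll, HZ / 2);
}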
void vxge_initialize_ethtool_ops(struct net_device *ndev);
int vxge_fw_upgrade(struct vxgedev *vdev, char *fw_name, int override);
/* #define VXGE_DEBUG_INIT: debug for initialization functions
* #define VXGE_DEBUG_TX : debug transmit related functions
* #define VXGE_DEBUG_RX : debug receive related functions
* #define VXGE_DEBUG_MEM : debug memory module
* #define VXGE_DEBUG_LOCK: debug locks
* #define VXGE_DEBUG_SEM : debug semaphore
* #define VXGE_DEBUG_ENTRYEXIT: debug functions by adding entry exit statements
*/
#define VXGE_DEBUG_INIT 0x00000001
#define VXGE_DEBUG_TX 0x00000002
#define VXGE_DEBUG_RX 0x00000004
#define VXGE_DEBUG_MEM 0x00000008
#define VXGE_DEBUG_LOCK 0x00000010
#define VXGE_DEBUG_SEM 0x00000020
#define VXGE_DEBUG_ENTRYEXIT 0x00000040
#define VXGE_DEBUG_INTR 0x00000080
#define VXGE_DEBUG_LL_CONFIG 0x00000100
/* Debug tracing for VXGE driver */
#ifndef VXGE_DEBUG_MASK
#define VXGE_DEBUG_MASK 0x0
#endif
#if (VXGE_DEBUG_LL_CONFIG & VXGE_DEBUG_MASK)
#define vxge_debug_ll_config(level, fmt, ...) \
vxge_debug_ll(level, VXGE_DEBUG_LL_CONFIG, fmt, ##__VA_ARGS__)
#else
#define vxge_debug_ll_config(level, fmt, ...) no_printk(fmt, ##__VA_ARGS__)
#endif
#if (VXGE_DEBUG_INIT & VXGE_DEBUG_MASK)
#define vxge_debug_init(level, fmt, ...) \
vxge_debug_ll(level, VXGE_DEBUG_INIT, fmt, ##__VA_ARGS__)
#else
#define vxge_debug_init(level, fmt, ...) no_printk(fmt, ##__VA_ARGS__)
#endif
#if (VXGE_DEBUG_TX & VXGE_DEBUG_MASK)
#define vxge_debug_tx(level, fmt, ...) \
vxge_debug_ll(level, VXGE_DEBUG_TX, fmt, ##__VA_ARGS__)
#else
#define vxge_debug_tx(level, fmt, ...) no_printk(fmt, ##__VA_ARGS__)
#endif
#if (VXGE_DEBUG_RX & VXGE_DEBUG_MASK)
#define vxge_debug_rx(level, fmt, ...) \
vxge_debug_ll(level, VXGE_DEBUG_RX, fmt, ##__VA_ARGS__)
#else
#define vxge_debug_rx(level, fmt, ...) no_printk(fmt, ##__VA_ARGS__)
#endif
#if (VXGE_DEBUG_MEM & VXGE_DEBUG_MASK)
#define vxge_debug_mem(level, fmt, ...) \
vxge_debug_ll(level, VXGE_DEBUG_MEM, fmt, ##__VA_ARGS__)
#else
#define vxge_debug_mem(level, fmt, ...) no_printk(fmt, ##__VA_ARGS__)
#endif
#if (VXGE_DEBUG_ENTRYEXIT & VXGE_DEBUG_MASK)
#define vxge_debug_entryexit(level, fmt, ...) \
vxge_debug_ll(level, VXGE_DEBUG_ENTRYEXIT, fmt, ##__VA_ARGS__)
#else
#define vxge_debug_entryexit(level, fmt, ...) no_printk(fmt, ##__VA_ARGS__)
#endif
#if (VXGE_DEBUG_INTR & VXGE_DEBUG_MASK)
#define vxge_debug_intr(level, fmt, ...) \
vxge_debug_ll(level, VXGE_DEBUG_INTR, fmt, ##__VA_ARGS__)
#else
#define vxge_debug_intr(level, fmt, ...) no_printk(fmt, ##__VA_ARGS__)
#endif
#define VXGE_DEVICE_DEBUG_LEVEL_SET(level, mask, vdev) {\
vxge_hw_device_debug_set((struct __vxge_hw_device *)vdev->devh, \
level, mask);\
VXGE_COPY_DEBUG_INFO_TO_LL(vdev, \
vxge_hw_device_error_level_get((struct __vxge_hw_device *) \
vdev->devh), \
vxge_hw_device_trace_level_get((struct __vxge_hw_device *) \
vdev->devh));\
}
#ifdef NETIF_F_GSO
#define vxge_tcp_mss(skb) (skb_shinfo(skb)->gso_size)
#define vxge_udp_mss(skb) (skb_shinfo(skb)->gso_size)
#define vxge_offload_type(skb) (skb_shinfo(skb)->gso_type)
#endif
#endif
/******************************************************************************
* This software may be used and distributed according to the terms of
* the GNU General Public License (GPL), incorporated herein by reference.
* Drivers based on or derived from this code fall under the GPL and must
* retain the authorship, copyright and license notice. This file is not
* a complete program and may only be used when the entire operating
* system is licensed under the GPL.
* See the file COPYING in this distribution for more information.
*
* vxge-traffic.c: Driver for Exar Corp's X3100 Series 10GbE PCIe I/O
* Virtualized Server Adapter.
* Copyright(c) 2002-2010 Exar Corp.
******************************************************************************/
#include <linux/etherdevice.h>
#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/prefetch.h>
#include "vxge-traffic.h"
#include "vxge-config.h"
#include "vxge-main.h"
/*
* vxge_hw_vpath_intr_enable - Enable vpath interrupts.
* @vp: Virtual Path handle.
*
* Enable vpath interrupts. The function is to be executed last in the
* vpath initialization sequence.
*
* See also: vxge_hw_vpath_intr_disable()
*/
enum vxge_hw_status vxge_hw_vpath_intr_enable(struct __vxge_hw_vpath_handle *vp)
{
struct __vxge_hw_virtualpath *vpath;
struct vxge_hw_vpath_reg __iomem *vp_reg;
enum vxge_hw_status status = VXGE_HW_OK;
if (vp == NULL) {
status = VXGE_HW_ERR_INVALID_HANDLE;
goto exit;
}
vpath = vp->vpath;
if (vpath->vp_open == VXGE_HW_VP_NOT_OPEN) {
status = VXGE_HW_ERR_VPATH_NOT_OPEN;
goto exit;
}
vp_reg = vpath->vp_reg;
writeq(VXGE_HW_INTR_MASK_ALL, &vp_reg->kdfcctl_errors_reg);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->general_errors_reg);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->pci_config_errors_reg);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->mrpcim_to_vpath_alarm_reg);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->srpcim_to_vpath_alarm_reg);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->vpath_ppif_int_status);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->srpcim_msg_to_vpath_reg);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->vpath_pcipif_int_status);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->prc_alarm_reg);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->wrdma_alarm_status);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->asic_ntwk_vp_err_reg);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->xgmac_vp_int_status);
readq(&vp_reg->vpath_general_int_status);
/* Mask unwanted interrupts */
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->vpath_pcipif_int_mask);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->srpcim_msg_to_vpath_mask);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->srpcim_to_vpath_alarm_mask);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->mrpcim_to_vpath_alarm_mask);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->pci_config_errors_mask);
/* Unmask the individual interrupts */
writeq((u32)vxge_bVALn((VXGE_HW_GENERAL_ERRORS_REG_DBLGEN_FIFO1_OVRFLOW|
VXGE_HW_GENERAL_ERRORS_REG_DBLGEN_FIFO2_OVRFLOW|
VXGE_HW_GENERAL_ERRORS_REG_STATSB_DROP_TIMEOUT_REQ|
VXGE_HW_GENERAL_ERRORS_REG_STATSB_PIF_CHAIN_ERR), 0, 32),
&vp_reg->general_errors_mask);
__vxge_hw_pio_mem_write32_upper(
(u32)vxge_bVALn((VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO1_OVRWR|
VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO2_OVRWR|
VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO1_POISON|
VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO2_POISON|
VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO1_DMA_ERR|
VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO2_DMA_ERR), 0, 32),
&vp_reg->kdfcctl_errors_mask);
__vxge_hw_pio_mem_write32_upper(0, &vp_reg->vpath_ppif_int_mask);
__vxge_hw_pio_mem_write32_upper(
(u32)vxge_bVALn(VXGE_HW_PRC_ALARM_REG_PRC_RING_BUMP, 0, 32),
&vp_reg->prc_alarm_mask);
__vxge_hw_pio_mem_write32_upper(0, &vp_reg->wrdma_alarm_mask);
__vxge_hw_pio_mem_write32_upper(0, &vp_reg->xgmac_vp_int_mask);
if (vpath->hldev->first_vp_id != vpath->vp_id)
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->asic_ntwk_vp_err_mask);
else
__vxge_hw_pio_mem_write32_upper((u32)vxge_bVALn((
VXGE_HW_ASIC_NTWK_VP_ERR_REG_XMACJ_NTWK_REAFFIRMED_FAULT |
VXGE_HW_ASIC_NTWK_VP_ERR_REG_XMACJ_NTWK_REAFFIRMED_OK), 0, 32),
&vp_reg->asic_ntwk_vp_err_mask);
__vxge_hw_pio_mem_write32_upper(0,
&vp_reg->vpath_general_int_mask);
exit:
return status;
}
/*
* vxge_hw_vpath_intr_disable - Disable vpath interrupts.
* @vp: Virtual Path handle.
*
* Disable vpath interrupts. The function is typically executed when the
* vpath is being closed or reset.
*
* See also: vxge_hw_vpath_intr_enable()
*/
enum vxge_hw_status vxge_hw_vpath_intr_disable(
struct __vxge_hw_vpath_handle *vp)
{
struct __vxge_hw_virtualpath *vpath;
enum vxge_hw_status status = VXGE_HW_OK;
struct vxge_hw_vpath_reg __iomem *vp_reg;
if (vp == NULL) {
status = VXGE_HW_ERR_INVALID_HANDLE;
goto exit;
}
vpath = vp->vpath;
if (vpath->vp_open == VXGE_HW_VP_NOT_OPEN) {
status = VXGE_HW_ERR_VPATH_NOT_OPEN;
goto exit;
}
vp_reg = vpath->vp_reg;
__vxge_hw_pio_mem_write32_upper(
(u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->vpath_general_int_mask);
writeq(VXGE_HW_INTR_MASK_ALL, &vp_reg->kdfcctl_errors_mask);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->general_errors_mask);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->pci_config_errors_mask);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->mrpcim_to_vpath_alarm_mask);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->srpcim_to_vpath_alarm_mask);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->vpath_ppif_int_mask);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->srpcim_msg_to_vpath_mask);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->vpath_pcipif_int_mask);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->wrdma_alarm_mask);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->prc_alarm_mask);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->xgmac_vp_int_mask);
__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
&vp_reg->asic_ntwk_vp_err_mask);
exit:
return status;
}
void vxge_hw_vpath_tti_ci_set(struct __vxge_hw_fifo *fifo)
{
struct vxge_hw_vpath_reg __iomem *vp_reg;
struct vxge_hw_vp_config *config;
u64 val64;
if (fifo->config->enable != VXGE_HW_FIFO_ENABLE)
return;
vp_reg = fifo->vp_reg;
config = container_of(fifo->config, struct vxge_hw_vp_config, fifo);
if (config->tti.timer_ci_en != VXGE_HW_TIM_TIMER_CI_ENABLE) {
config->tti.timer_ci_en = VXGE_HW_TIM_TIMER_CI_ENABLE;
val64 = readq(&vp_reg->tim_cfg1_int_num[VXGE_HW_VPATH_INTR_TX]);
val64 |= VXGE_HW_TIM_CFG1_INT_NUM_TIMER_CI;
fifo->tim_tti_cfg1_saved = val64;
writeq(val64, &vp_reg->tim_cfg1_int_num[VXGE_HW_VPATH_INTR_TX]);
}
}
void vxge_hw_vpath_dynamic_rti_ci_set(struct __vxge_hw_ring *ring)
{
u64 val64 = ring->tim_rti_cfg1_saved;
val64 |= VXGE_HW_TIM_CFG1_INT_NUM_TIMER_CI;
ring->tim_rti_cfg1_saved = val64;
writeq(val64, &ring->vp_reg->tim_cfg1_int_num[VXGE_HW_VPATH_INTR_RX]);
}
void vxge_hw_vpath_dynamic_tti_rtimer_set(struct __vxge_hw_fifo *fifo)
{
u64 val64 = fifo->tim_tti_cfg3_saved;
u64 timer = (fifo->rtimer * 1000) / 272;
val64 &= ~VXGE_HW_TIM_CFG3_INT_NUM_RTIMER_VAL(0x3ffffff);
if (timer)
val64 |= VXGE_HW_TIM_CFG3_INT_NUM_RTIMER_VAL(timer) |
VXGE_HW_TIM_CFG3_INT_NUM_RTIMER_EVENT_SF(5);
writeq(val64, &fifo->vp_reg->tim_cfg3_int_num[VXGE_HW_VPATH_INTR_TX]);
/* tti_cfg3_saved is not updated again because it is
* initialized at one place only - init time.
*/
}
void vxge_hw_vpath_dynamic_rti_rtimer_set(struct __vxge_hw_ring *ring)
{
u64 val64 = ring->tim_rti_cfg3_saved;
u64 timer = (ring->rtimer * 1000) / 272;
val64 &= ~VXGE_HW_TIM_CFG3_INT_NUM_RTIMER_VAL(0x3ffffff);
if (timer)
val64 |= VXGE_HW_TIM_CFG3_INT_NUM_RTIMER_VAL(timer) |
VXGE_HW_TIM_CFG3_INT_NUM_RTIMER_EVENT_SF(4);
writeq(val64, &ring->vp_reg->tim_cfg3_int_num[VXGE_HW_VPATH_INTR_RX]);
/* rti_cfg3_saved is not updated again because it is
* initialized at one place only - init time.
*/
}
/**
* vxge_hw_channel_msix_mask - Mask MSIX Vector.
* @channel: Channel for rx or tx handle
* @msix_id: MSIX ID
*
* The function masks the MSI-X interrupt for the given msix_id.
*/
void vxge_hw_channel_msix_mask(struct __vxge_hw_channel *channel, int msix_id)
{
__vxge_hw_pio_mem_write32_upper(
(u32)vxge_bVALn(vxge_mBIT(msix_id >> 2), 0, 32),
&channel->common_reg->set_msix_mask_vect[msix_id%4]);
}
/**
* vxge_hw_channel_msix_unmask - Unmask the MSIX Vector.
* @channel: Channel for rx or tx handle
* @msix_id: MSIX ID
*
* The function unmasks the MSI-X interrupt for the given msix_id.
*/
void
vxge_hw_channel_msix_unmask(struct __vxge_hw_channel *channel, int msix_id)
{
__vxge_hw_pio_mem_write32_upper(
(u32)vxge_bVALn(vxge_mBIT(msix_id >> 2), 0, 32),
&channel->common_reg->clear_msix_mask_vect[msix_id%4]);
}
/**
* vxge_hw_channel_msix_clear - Clear the MSIX Vector.
* @channel: Channel for rx or tx handle
* @msix_id: MSIX ID
*
* The function clears the MSI-X interrupt for the given msix_id when the
* device is configured in MSI-X one-shot mode.
*/
void vxge_hw_channel_msix_clear(struct __vxge_hw_channel *channel, int msix_id)
{
__vxge_hw_pio_mem_write32_upper(
(u32) vxge_bVALn(vxge_mBIT(msix_id >> 2), 0, 32),
&channel->common_reg->clr_msix_one_shot_vec[msix_id % 4]);
}
/**
* vxge_hw_device_set_intr_type - Updates the configuration
* with new interrupt type.
* @hldev: HW device handle.
* @intr_mode: New interrupt type
*/
u32 vxge_hw_device_set_intr_type(struct __vxge_hw_device *hldev, u32 intr_mode)
{
if ((intr_mode != VXGE_HW_INTR_MODE_IRQLINE) &&
(intr_mode != VXGE_HW_INTR_MODE_MSIX) &&
(intr_mode != VXGE_HW_INTR_MODE_MSIX_ONE_SHOT) &&
(intr_mode != VXGE_HW_INTR_MODE_DEF))
intr_mode = VXGE_HW_INTR_MODE_IRQLINE;
hldev->config.intr_mode = intr_mode;
return intr_mode;
}
/**
* vxge_hw_device_intr_enable - Enable interrupts.
* @hldev: HW device handle.
*
* Enable Titan interrupts. The function is to be executed last in the
* Titan initialization sequence.
*
* See also: vxge_hw_device_intr_disable()
*/
void vxge_hw_device_intr_enable(struct __vxge_hw_device *hldev)
{
u32 i;
u64 val64;
u32 val32;
vxge_hw_device_mask_all(hldev);
for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
if (!(hldev->vpaths_deployed & vxge_mBIT(i)))
continue;
vxge_hw_vpath_intr_enable(
VXGE_HW_VIRTUAL_PATH_HANDLE(&hldev->virtual_paths[i]));
}
if (hldev->config.intr_mode == VXGE_HW_INTR_MODE_IRQLINE) {
val64 = hldev->tim_int_mask0[VXGE_HW_VPATH_INTR_TX] |
hldev->tim_int_mask0[VXGE_HW_VPATH_INTR_RX];
if (val64 != 0) {
writeq(val64, &hldev->common_reg->tim_int_status0);
writeq(~val64, &hldev->common_reg->tim_int_mask0);
}
val32 = hldev->tim_int_mask1[VXGE_HW_VPATH_INTR_TX] |
hldev->tim_int_mask1[VXGE_HW_VPATH_INTR_RX];
if (val32 != 0) {
__vxge_hw_pio_mem_write32_upper(val32,
&hldev->common_reg->tim_int_status1);
__vxge_hw_pio_mem_write32_upper(~val32,
&hldev->common_reg->tim_int_mask1);
}
}
val64 = readq(&hldev->common_reg->titan_general_int_status);
vxge_hw_device_unmask_all(hldev);
}
/**
* vxge_hw_device_intr_disable - Disable Titan interrupts.
* @hldev: HW device handle.
*
* Disable Titan interrupts.
*
* See also: vxge_hw_device_intr_enable()
*/
void vxge_hw_device_intr_disable(struct __vxge_hw_device *hldev)
{
u32 i;
vxge_hw_device_mask_all(hldev);
/* mask all the tim interrupts */
writeq(VXGE_HW_INTR_MASK_ALL, &hldev->common_reg->tim_int_mask0);
__vxge_hw_pio_mem_write32_upper(VXGE_HW_DEFAULT_32,
&hldev->common_reg->tim_int_mask1);
for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
if (!(hldev->vpaths_deployed & vxge_mBIT(i)))
continue;
vxge_hw_vpath_intr_disable(
VXGE_HW_VIRTUAL_PATH_HANDLE(&hldev->virtual_paths[i]));
}
}
/**
* vxge_hw_device_mask_all - Mask all device interrupts.
* @hldev: HW device handle.
*
* Mask all device interrupts.
*
* See also: vxge_hw_device_unmask_all()
*/
void vxge_hw_device_mask_all(struct __vxge_hw_device *hldev)
{
u64 val64;
val64 = VXGE_HW_TITAN_MASK_ALL_INT_ALARM |
VXGE_HW_TITAN_MASK_ALL_INT_TRAFFIC;
__vxge_hw_pio_mem_write32_upper((u32)vxge_bVALn(val64, 0, 32),
&hldev->common_reg->titan_mask_all_int);
}
/**
* vxge_hw_device_unmask_all - Unmask all device interrupts.
* @hldev: HW device handle.
*
* Unmask all device interrupts.
*
* See also: vxge_hw_device_mask_all()
*/
void vxge_hw_device_unmask_all(struct __vxge_hw_device *hldev)
{
u64 val64 = 0;
if (hldev->config.intr_mode == VXGE_HW_INTR_MODE_IRQLINE)
val64 = VXGE_HW_TITAN_MASK_ALL_INT_TRAFFIC;
__vxge_hw_pio_mem_write32_upper((u32)vxge_bVALn(val64, 0, 32),
&hldev->common_reg->titan_mask_all_int);
}
/**
* vxge_hw_device_flush_io - Flush io writes.
* @hldev: HW device handle.
*
* The function performs a read operation to flush io writes.
*
* Returns: void
*/
void vxge_hw_device_flush_io(struct __vxge_hw_device *hldev)
{
readl(&hldev->common_reg->titan_general_int_status);
}
/**
* __vxge_hw_device_handle_error - Handle error
* @hldev: HW device
* @vp_id: Vpath Id
* @type: Error type. Please see enum vxge_hw_event{}
*
* Handle error.
*/
static enum vxge_hw_status
__vxge_hw_device_handle_error(struct __vxge_hw_device *hldev, u32 vp_id,
enum vxge_hw_event type)
{
switch (type) {
case VXGE_HW_EVENT_UNKNOWN:
break;
case VXGE_HW_EVENT_RESET_START:
case VXGE_HW_EVENT_RESET_COMPLETE:
case VXGE_HW_EVENT_LINK_DOWN:
case VXGE_HW_EVENT_LINK_UP:
goto out;
case VXGE_HW_EVENT_ALARM_CLEARED:
goto out;
case VXGE_HW_EVENT_ECCERR:
case VXGE_HW_EVENT_MRPCIM_ECCERR:
goto out;
case VXGE_HW_EVENT_FIFO_ERR:
case VXGE_HW_EVENT_VPATH_ERR:
case VXGE_HW_EVENT_CRITICAL_ERR:
case VXGE_HW_EVENT_SERR:
break;
case VXGE_HW_EVENT_SRPCIM_SERR:
case VXGE_HW_EVENT_MRPCIM_SERR:
goto out;
case VXGE_HW_EVENT_SLOT_FREEZE:
break;
default:
vxge_assert(0);
goto out;
}
/* notify driver */
if (hldev->uld_callbacks->crit_err)
hldev->uld_callbacks->crit_err(hldev,
type, vp_id);
out:
return VXGE_HW_OK;
}
/*
* __vxge_hw_device_handle_link_down_ind
* @hldev: HW device handle.
*
* Link down indication handler. The function is invoked by HW when
* Titan indicates that the link is down.
*/
static enum vxge_hw_status
__vxge_hw_device_handle_link_down_ind(struct __vxge_hw_device *hldev)
{
/*
* If the link state is already down, return.
*/
if (hldev->link_state == VXGE_HW_LINK_DOWN)
goto exit;
hldev->link_state = VXGE_HW_LINK_DOWN;
/* notify driver */
if (hldev->uld_callbacks->link_down)
hldev->uld_callbacks->link_down(hldev);
exit:
return VXGE_HW_OK;
}
/*
* __vxge_hw_device_handle_link_up_ind
* @hldev: HW device handle.
*
* Link up indication handler. The function is invoked by HW when
* Titan indicates that the link is up for a programmable amount of time.
*/
static enum vxge_hw_status
__vxge_hw_device_handle_link_up_ind(struct __vxge_hw_device *hldev)
{
/*
* If the link state is already up, return.
*/
if (hldev->link_state == VXGE_HW_LINK_UP)
goto exit;
hldev->link_state = VXGE_HW_LINK_UP;
/* notify driver */
if (hldev->uld_callbacks->link_up)
hldev->uld_callbacks->link_up(hldev);
exit:
return VXGE_HW_OK;
}
/*
* __vxge_hw_vpath_alarm_process - Process Alarms.
* @vpath: Virtual Path.
* @skip_alarms: Do not clear the alarms
*
* Process vpath alarms.
*
*/
static enum vxge_hw_status
__vxge_hw_vpath_alarm_process(struct __vxge_hw_virtualpath *vpath,
u32 skip_alarms)
{
u64 val64;
u64 alarm_status;
u64 pic_status;
struct __vxge_hw_device *hldev = NULL;
enum vxge_hw_event alarm_event = VXGE_HW_EVENT_UNKNOWN;
u64 mask64;
struct vxge_hw_vpath_stats_sw_info *sw_stats;
struct vxge_hw_vpath_reg __iomem *vp_reg;
if (vpath == NULL) {
alarm_event = VXGE_HW_SET_LEVEL(VXGE_HW_EVENT_UNKNOWN,
alarm_event);
goto out2;
}
hldev = vpath->hldev;
vp_reg = vpath->vp_reg;
alarm_status = readq(&vp_reg->vpath_general_int_status);
if (alarm_status == VXGE_HW_ALL_FOXES) {
alarm_event = VXGE_HW_SET_LEVEL(VXGE_HW_EVENT_SLOT_FREEZE,
alarm_event);
goto out;
}
sw_stats = vpath->sw_stats;
if (alarm_status & ~(
VXGE_HW_VPATH_GENERAL_INT_STATUS_PIC_INT |
VXGE_HW_VPATH_GENERAL_INT_STATUS_PCI_INT |
VXGE_HW_VPATH_GENERAL_INT_STATUS_WRDMA_INT |
VXGE_HW_VPATH_GENERAL_INT_STATUS_XMAC_INT)) {
sw_stats->error_stats.unknown_alarms++;
alarm_event = VXGE_HW_SET_LEVEL(VXGE_HW_EVENT_UNKNOWN,
alarm_event);
goto out;
}
if (alarm_status & VXGE_HW_VPATH_GENERAL_INT_STATUS_XMAC_INT) {
val64 = readq(&vp_reg->xgmac_vp_int_status);
if (val64 &
VXGE_HW_XGMAC_VP_INT_STATUS_ASIC_NTWK_VP_ERR_ASIC_NTWK_VP_INT) {
val64 = readq(&vp_reg->asic_ntwk_vp_err_reg);
if (((val64 &
VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_FLT) &&
(!(val64 &
VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_OK))) ||
((val64 &
VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_FLT_OCCURR) &&
(!(val64 &
VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_OK_OCCURR)
))) {
sw_stats->error_stats.network_sustained_fault++;
writeq(
VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_FLT,
&vp_reg->asic_ntwk_vp_err_mask);
__vxge_hw_device_handle_link_down_ind(hldev);
alarm_event = VXGE_HW_SET_LEVEL(
VXGE_HW_EVENT_LINK_DOWN, alarm_event);
}
if (((val64 &
VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_OK) &&
(!(val64 &
VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_FLT))) ||
((val64 &
VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_OK_OCCURR) &&
(!(val64 &
VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_FLT_OCCURR)
))) {
sw_stats->error_stats.network_sustained_ok++;
writeq(
VXGE_HW_ASIC_NW_VP_ERR_REG_XMACJ_STN_OK,
&vp_reg->asic_ntwk_vp_err_mask);
__vxge_hw_device_handle_link_up_ind(hldev);
alarm_event = VXGE_HW_SET_LEVEL(
VXGE_HW_EVENT_LINK_UP, alarm_event);
}
writeq(VXGE_HW_INTR_MASK_ALL,
&vp_reg->asic_ntwk_vp_err_reg);
alarm_event = VXGE_HW_SET_LEVEL(
VXGE_HW_EVENT_ALARM_CLEARED, alarm_event);
if (skip_alarms)
return VXGE_HW_OK;
}
}
if (alarm_status & VXGE_HW_VPATH_GENERAL_INT_STATUS_PIC_INT) {
pic_status = readq(&vp_reg->vpath_ppif_int_status);
if (pic_status &
VXGE_HW_VPATH_PPIF_INT_STATUS_GENERAL_ERRORS_GENERAL_INT) {
val64 = readq(&vp_reg->general_errors_reg);
mask64 = readq(&vp_reg->general_errors_mask);
if ((val64 &
VXGE_HW_GENERAL_ERRORS_REG_INI_SERR_DET) &
~mask64) {
sw_stats->error_stats.ini_serr_det++;
alarm_event = VXGE_HW_SET_LEVEL(
VXGE_HW_EVENT_SERR, alarm_event);
}
if ((val64 &
VXGE_HW_GENERAL_ERRORS_REG_DBLGEN_FIFO0_OVRFLOW) &
~mask64) {
sw_stats->error_stats.dblgen_fifo0_overflow++;
alarm_event = VXGE_HW_SET_LEVEL(
VXGE_HW_EVENT_FIFO_ERR, alarm_event);
}
if ((val64 &
VXGE_HW_GENERAL_ERRORS_REG_STATSB_PIF_CHAIN_ERR) &
~mask64)
sw_stats->error_stats.statsb_pif_chain_error++;
if ((val64 &
VXGE_HW_GENERAL_ERRORS_REG_STATSB_DROP_TIMEOUT_REQ) &
~mask64)
sw_stats->error_stats.statsb_drop_timeout++;
if ((val64 &
VXGE_HW_GENERAL_ERRORS_REG_TGT_ILLEGAL_ACCESS) &
~mask64)
sw_stats->error_stats.target_illegal_access++;
if (!skip_alarms) {
writeq(VXGE_HW_INTR_MASK_ALL,
&vp_reg->general_errors_reg);
alarm_event = VXGE_HW_SET_LEVEL(
VXGE_HW_EVENT_ALARM_CLEARED,
alarm_event);
}
}
if (pic_status &
VXGE_HW_VPATH_PPIF_INT_STATUS_KDFCCTL_ERRORS_KDFCCTL_INT) {
val64 = readq(&vp_reg->kdfcctl_errors_reg);
mask64 = readq(&vp_reg->kdfcctl_errors_mask);
if ((val64 &
VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO0_OVRWR) &
~mask64) {
sw_stats->error_stats.kdfcctl_fifo0_overwrite++;
alarm_event = VXGE_HW_SET_LEVEL(
VXGE_HW_EVENT_FIFO_ERR,
alarm_event);
}
if ((val64 &
VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO0_POISON) &
~mask64) {
sw_stats->error_stats.kdfcctl_fifo0_poison++;
alarm_event = VXGE_HW_SET_LEVEL(
VXGE_HW_EVENT_FIFO_ERR,
alarm_event);
}
if ((val64 &
VXGE_HW_KDFCCTL_ERRORS_REG_KDFCCTL_FIFO0_DMA_ERR) &
~mask64) {
sw_stats->error_stats.kdfcctl_fifo0_dma_error++;
alarm_event = VXGE_HW_SET_LEVEL(
VXGE_HW_EVENT_FIFO_ERR,
alarm_event);
}
if (!skip_alarms) {
writeq(VXGE_HW_INTR_MASK_ALL,
&vp_reg->kdfcctl_errors_reg);
alarm_event = VXGE_HW_SET_LEVEL(
VXGE_HW_EVENT_ALARM_CLEARED,
alarm_event);
}
}
}
if (alarm_status & VXGE_HW_VPATH_GENERAL_INT_STATUS_WRDMA_INT) {
val64 = readq(&vp_reg->wrdma_alarm_status);
if (val64 & VXGE_HW_WRDMA_ALARM_STATUS_PRC_ALARM_PRC_INT) {
val64 = readq(&vp_reg->prc_alarm_reg);
mask64 = readq(&vp_reg->prc_alarm_mask);
if ((val64 & VXGE_HW_PRC_ALARM_REG_PRC_RING_BUMP)&
~mask64)
sw_stats->error_stats.prc_ring_bumps++;
if ((val64 & VXGE_HW_PRC_ALARM_REG_PRC_RXDCM_SC_ERR) &
~mask64) {
sw_stats->error_stats.prc_rxdcm_sc_err++;
alarm_event = VXGE_HW_SET_LEVEL(
VXGE_HW_EVENT_VPATH_ERR,
alarm_event);
}
if ((val64 & VXGE_HW_PRC_ALARM_REG_PRC_RXDCM_SC_ABORT)
& ~mask64) {
sw_stats->error_stats.prc_rxdcm_sc_abort++;
alarm_event = VXGE_HW_SET_LEVEL(
VXGE_HW_EVENT_VPATH_ERR,
alarm_event);
}
if ((val64 & VXGE_HW_PRC_ALARM_REG_PRC_QUANTA_SIZE_ERR)
& ~mask64) {
sw_stats->error_stats.prc_quanta_size_err++;
alarm_event = VXGE_HW_SET_LEVEL(
VXGE_HW_EVENT_VPATH_ERR,
alarm_event);
}
if (!skip_alarms) {
writeq(VXGE_HW_INTR_MASK_ALL,
&vp_reg->prc_alarm_reg);
alarm_event = VXGE_HW_SET_LEVEL(
VXGE_HW_EVENT_ALARM_CLEARED,
alarm_event);
}
}
}
out:
hldev->stats.sw_dev_err_stats.vpath_alarms++;
out2:
if ((alarm_event == VXGE_HW_EVENT_ALARM_CLEARED) ||
(alarm_event == VXGE_HW_EVENT_UNKNOWN))
return VXGE_HW_OK;
__vxge_hw_device_handle_error(hldev, vpath->vp_id, alarm_event);
if (alarm_event == VXGE_HW_EVENT_SERR)
return VXGE_HW_ERR_CRITICAL;
return (alarm_event == VXGE_HW_EVENT_SLOT_FREEZE) ?
VXGE_HW_ERR_SLOT_FREEZE :
(alarm_event == VXGE_HW_EVENT_FIFO_ERR) ? VXGE_HW_ERR_FIFO :
VXGE_HW_ERR_VPATH;
}
/**
* vxge_hw_device_begin_irq - Begin IRQ processing.
* @hldev: HW device handle.
* @skip_alarms: Do not clear the alarms
* @reason: "Reason" for the interrupt, the value of Titan's
* general_int_status register.
*
* The function performs two actions: it first checks whether (on a shared
* IRQ line) the interrupt was raised by the device, and then it masks the
* device interrupts.
*
* Note:
* vxge_hw_device_begin_irq() does not flush MMIO writes through the
* bridge. Therefore, two back-to-back interrupts are potentially possible.
*
* Returns: VXGE_HW_ERR_WRONG_IRQ if the interrupt is not "ours" (note that
* in this case the device remains enabled and *reason is set to 0).
* Otherwise, *reason is set to the 64-bit general adapter interrupt status
* and the resulting vxge_hw_status is returned.
*/
enum vxge_hw_status vxge_hw_device_begin_irq(struct __vxge_hw_device *hldev,
u32 skip_alarms, u64 *reason)
{
u32 i;
u64 val64;
u64 adapter_status;
u64 vpath_mask;
enum vxge_hw_status ret = VXGE_HW_OK;
val64 = readq(&hldev->common_reg->titan_general_int_status);
if (unlikely(!val64)) {
/* not Titan interrupt */
*reason = 0;
ret = VXGE_HW_ERR_WRONG_IRQ;
goto exit;
}
if (unlikely(val64 == VXGE_HW_ALL_FOXES)) {
adapter_status = readq(&hldev->common_reg->adapter_status);
if (adapter_status == VXGE_HW_ALL_FOXES) {
__vxge_hw_device_handle_error(hldev,
NULL_VPID, VXGE_HW_EVENT_SLOT_FREEZE);
*reason = 0;
ret = VXGE_HW_ERR_SLOT_FREEZE;
goto exit;
}
}
hldev->stats.sw_dev_info_stats.total_intr_cnt++;
*reason = val64;
vpath_mask = hldev->vpaths_deployed >>
(64 - VXGE_HW_MAX_VIRTUAL_PATHS);
if (val64 &
VXGE_HW_TITAN_GENERAL_INT_STATUS_VPATH_TRAFFIC_INT(vpath_mask)) {
hldev->stats.sw_dev_info_stats.traffic_intr_cnt++;
return VXGE_HW_OK;
}
hldev->stats.sw_dev_info_stats.not_traffic_intr_cnt++;
if (unlikely(val64 &
VXGE_HW_TITAN_GENERAL_INT_STATUS_VPATH_ALARM_INT)) {
enum vxge_hw_status error_level = VXGE_HW_OK;
hldev->stats.sw_dev_err_stats.vpath_alarms++;
for (i = 0; i < VXGE_HW_MAX_VIRTUAL_PATHS; i++) {
if (!(hldev->vpaths_deployed & vxge_mBIT(i)))
continue;
ret = __vxge_hw_vpath_alarm_process(
&hldev->virtual_paths[i], skip_alarms);
error_level = VXGE_HW_SET_LEVEL(ret, error_level);
if (unlikely((ret == VXGE_HW_ERR_CRITICAL) ||
(ret == VXGE_HW_ERR_SLOT_FREEZE)))
break;
}
ret = error_level;
}
exit:
return ret;
}
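/*
 * Illustrative sketch (not part of the original driver): how an INTA
 * interrupt handler might use vxge_hw_device_begin_irq().  The handler name
 * and the deferral to NAPI are hypothetical, and <linux/interrupt.h> is
 * assumed to be available.
 */
static irqreturn_t vxge_example_isr(int irq, void *dev_id)
{
	struct __vxge_hw_device *hldev = dev_id;
	enum vxge_hw_status status;
	u64 reason;

	/* check whether the (possibly shared) interrupt is ours */
	status = vxge_hw_device_begin_irq(hldev, 0, &reason);
	if (status == VXGE_HW_ERR_WRONG_IRQ)
		return IRQ_NONE;

	if (reason) {
		/* traffic or alarm work: defer to NAPI/workqueue context,
		 * which would later call vxge_hw_device_clear_tx_rx() and
		 * re-enable interrupts
		 */
	}
	return IRQ_HANDLED;
}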
/**
* vxge_hw_device_clear_tx_rx - Acknowledge (that is, clear) the
* condition that has caused the Tx and RX interrupt.
* @hldev: HW device.
*
* Acknowledge (that is, clear) the condition that has caused
* the Tx and Rx interrupt.
* See also: vxge_hw_device_begin_irq(),
* vxge_hw_device_mask_tx_rx(), vxge_hw_device_unmask_tx_rx().
*/
void vxge_hw_device_clear_tx_rx(struct __vxge_hw_device *hldev)
{
if ((hldev->tim_int_mask0[VXGE_HW_VPATH_INTR_TX] != 0) ||
(hldev->tim_int_mask0[VXGE_HW_VPATH_INTR_RX] != 0)) {
writeq((hldev->tim_int_mask0[VXGE_HW_VPATH_INTR_TX] |
hldev->tim_int_mask0[VXGE_HW_VPATH_INTR_RX]),
&hldev->common_reg->tim_int_status0);
}
if ((hldev->tim_int_mask1[VXGE_HW_VPATH_INTR_TX] != 0) ||
(hldev->tim_int_mask1[VXGE_HW_VPATH_INTR_RX] != 0)) {
__vxge_hw_pio_mem_write32_upper(
(hldev->tim_int_mask1[VXGE_HW_VPATH_INTR_TX] |
hldev->tim_int_mask1[VXGE_HW_VPATH_INTR_RX]),
&hldev->common_reg->tim_int_status1);
}
}
/*
* vxge_hw_channel_dtr_alloc - Allocate a dtr from the channel
* @channel: Channel
* @dtrh: Buffer to return the DTR pointer
*
* Allocates a dtr from the reserve array. If the reserve array is empty,
* it swaps the reserve and free arrays.
*
*/
static enum vxge_hw_status
vxge_hw_channel_dtr_alloc(struct __vxge_hw_channel *channel, void **dtrh)
{
if (channel->reserve_ptr - channel->reserve_top > 0) {
_alloc_after_swap:
*dtrh = channel->reserve_arr[--channel->reserve_ptr];
return VXGE_HW_OK;
}
/* switch between empty and full arrays */
/* the idea behind such a design is that by having free and reserved
* arrays separated we basically separated irq and non-irq parts.
* i.e. no additional lock need to be done when we free a resource */
if (channel->length - channel->free_ptr > 0) {
swap(channel->reserve_arr, channel->free_arr);
channel->reserve_ptr = channel->length;
channel->reserve_top = channel->free_ptr;
channel->free_ptr = channel->length;
channel->stats->reserve_free_swaps_cnt++;
goto _alloc_after_swap;
}
channel->stats->full_cnt++;
*dtrh = NULL;
return VXGE_HW_INF_OUT_OF_DESCRIPTORS;
}
/*
* vxge_hw_channel_dtr_post - Post a dtr to the channel
* @channelh: Channel
* @dtrh: DTR pointer
*
* Posts a dtr to work array.
*
*/
static void
vxge_hw_channel_dtr_post(struct __vxge_hw_channel *channel, void *dtrh)
{
vxge_assert(channel->work_arr[channel->post_index] == NULL);
channel->work_arr[channel->post_index++] = dtrh;
/* wrap-around */
if (channel->post_index == channel->length)
channel->post_index = 0;
}
/*
* vxge_hw_channel_dtr_try_complete - Returns next completed dtr
* @channel: Channel
* @dtr: Buffer to return the next completed DTR pointer
*
* Returns the next completed dtr without removing it from the work array.
*
*/
void
vxge_hw_channel_dtr_try_complete(struct __vxge_hw_channel *channel, void **dtrh)
{
vxge_assert(channel->compl_index < channel->length);
*dtrh = channel->work_arr[channel->compl_index];
prefetch(*dtrh);
}
/*
* vxge_hw_channel_dtr_complete - Removes next completed dtr from the work array
* @channel: Channel handle
*
* Removes the next completed dtr from work array
*
*/
void vxge_hw_channel_dtr_complete(struct __vxge_hw_channel *channel)
{
channel->work_arr[channel->compl_index] = NULL;
/* wrap-around */
if (++channel->compl_index == channel->length)
channel->compl_index = 0;
channel->stats->total_compl_cnt++;
}
/*
* vxge_hw_channel_dtr_free - Frees a dtr
* @channel: Channel handle
* @dtr: DTR pointer
*
* Returns the dtr to free array
*
*/
void vxge_hw_channel_dtr_free(struct __vxge_hw_channel *channel, void *dtrh)
{
channel->free_arr[--channel->free_ptr] = dtrh;
}
/*
* vxge_hw_channel_dtr_count
* @channel: Channel handle. Obtained via vxge_hw_channel_open().
*
* Retrieve the number of DTRs available. This function cannot be called
* from the data path. ring_initial_replenishi() is the only user.
*/
int vxge_hw_channel_dtr_count(struct __vxge_hw_channel *channel)
{
return (channel->reserve_ptr - channel->reserve_top) +
(channel->length - channel->free_ptr);
}
/**
* vxge_hw_ring_rxd_reserve - Reserve ring descriptor.
* @ring: Handle to the ring object used for receive
* @rxdh: Reserved descriptor. On success HW fills this "out" parameter
* with a valid handle.
*
* Reserve an Rx descriptor for subsequent filling-in by the driver
* and posting on the corresponding channel (@channelh)
* via vxge_hw_ring_rxd_post().
*
* Returns: VXGE_HW_OK - success.
* VXGE_HW_INF_OUT_OF_DESCRIPTORS - Currently no descriptors available.
*
*/
enum vxge_hw_status vxge_hw_ring_rxd_reserve(struct __vxge_hw_ring *ring,
void **rxdh)
{
enum vxge_hw_status status;
struct __vxge_hw_channel *channel;
channel = &ring->channel;
status = vxge_hw_channel_dtr_alloc(channel, rxdh);
if (status == VXGE_HW_OK) {
struct vxge_hw_ring_rxd_1 *rxdp =
(struct vxge_hw_ring_rxd_1 *)*rxdh;
rxdp->control_0 = rxdp->control_1 = 0;
}
return status;
}
/**
* vxge_hw_ring_rxd_free - Free descriptor.
* @ring: Handle to the ring object used for receive
* @rxdh: Descriptor handle.
*
* Free the reserved descriptor. This operation is "symmetrical" to
* vxge_hw_ring_rxd_reserve. The "free-ing" completes the descriptor's
* lifecycle.
*
* After free-ing (see vxge_hw_ring_rxd_free()) the descriptor again can
* be:
*
* - reserved (vxge_hw_ring_rxd_reserve);
*
* - posted (vxge_hw_ring_rxd_post);
*
* - completed (vxge_hw_ring_rxd_next_completed);
*
* - and recycled again (vxge_hw_ring_rxd_free).
*
* For alternative state transitions and more details please refer to
* the design doc.
*
*/
void vxge_hw_ring_rxd_free(struct __vxge_hw_ring *ring, void *rxdh)
{
struct __vxge_hw_channel *channel;
channel = &ring->channel;
vxge_hw_channel_dtr_free(channel, rxdh);
}
/**
* vxge_hw_ring_rxd_pre_post - Prepare rxd and post
* @ring: Handle to the ring object used for receive
* @rxdh: Descriptor handle.
*
* This routine prepares a rxd and posts
*/
void vxge_hw_ring_rxd_pre_post(struct __vxge_hw_ring *ring, void *rxdh)
{
struct __vxge_hw_channel *channel;
channel = &ring->channel;
vxge_hw_channel_dtr_post(channel, rxdh);
}
/**
* vxge_hw_ring_rxd_post_post - Process rxd after post.
* @ring: Handle to the ring object used for receive
* @rxdh: Descriptor handle.
*
* Processes rxd after post
*/
void vxge_hw_ring_rxd_post_post(struct __vxge_hw_ring *ring, void *rxdh)
{
struct vxge_hw_ring_rxd_1 *rxdp = (struct vxge_hw_ring_rxd_1 *)rxdh;
rxdp->control_0 = VXGE_HW_RING_RXD_LIST_OWN_ADAPTER;
if (ring->stats->common_stats.usage_cnt > 0)
ring->stats->common_stats.usage_cnt--;
}
/**
* vxge_hw_ring_rxd_post - Post descriptor on the ring.
* @ring: Handle to the ring object used for receive
* @rxdh: Descriptor obtained via vxge_hw_ring_rxd_reserve().
*
* Post descriptor on the ring.
* Prior to posting the descriptor should be filled in accordance with
* Host/Titan interface specification for a given service (LL, etc.).
*
*/
void vxge_hw_ring_rxd_post(struct __vxge_hw_ring *ring, void *rxdh)
{
struct vxge_hw_ring_rxd_1 *rxdp = (struct vxge_hw_ring_rxd_1 *)rxdh;
struct __vxge_hw_channel *channel;
channel = &ring->channel;
wmb();
rxdp->control_0 = VXGE_HW_RING_RXD_LIST_OWN_ADAPTER;
vxge_hw_channel_dtr_post(channel, rxdh);
if (ring->stats->common_stats.usage_cnt > 0)
ring->stats->common_stats.usage_cnt--;
}
/**
* vxge_hw_ring_rxd_post_post_wmb - Process rxd after post with memory barrier.
* @ring: Handle to the ring object used for receive
* @rxdh: Descriptor handle.
*
* Processes rxd after post with memory barrier.
*/
void vxge_hw_ring_rxd_post_post_wmb(struct __vxge_hw_ring *ring, void *rxdh)
{
wmb();
vxge_hw_ring_rxd_post_post(ring, rxdh);
}
/**
* vxge_hw_ring_rxd_next_completed - Get the _next_ completed descriptor.
* @ring: Handle to the ring object used for receive
* @rxdh: Descriptor handle. Returned by HW.
* @t_code: Transfer code, as per Titan User Guide,
* Receive Descriptor Format. Returned by HW.
*
* Retrieve the _next_ completed descriptor.
* HW uses ring callback (*vxge_hw_ring_callback_f) to notify the
* driver of new completed descriptors. After that
* the driver can use vxge_hw_ring_rxd_next_completed to retrieve the rest
* of the completions (the very first completion is passed by HW via
* vxge_hw_ring_callback_f).
*
* Implementation-wise, the driver is free to call
* vxge_hw_ring_rxd_next_completed either immediately from inside the
* ring callback, or in a deferred fashion and separate (from HW)
* context.
*
* Non-zero @t_code means failure to fill-in receive buffer(s)
* of the descriptor.
* For instance, parity error detected during the data transfer.
* In this case Titan will complete the descriptor and indicate
* to the host that the received data is not to be used.
* For details please refer to Titan User Guide.
*
* Returns: VXGE_HW_OK - success.
* VXGE_HW_INF_NO_MORE_COMPLETED_DESCRIPTORS - No completed descriptors
* are currently available for processing.
*
* See also: vxge_hw_ring_callback_f{},
* vxge_hw_fifo_rxd_next_completed(), enum vxge_hw_status{}.
*/
enum vxge_hw_status vxge_hw_ring_rxd_next_completed(
struct __vxge_hw_ring *ring, void **rxdh, u8 *t_code)
{
struct __vxge_hw_channel *channel;
struct vxge_hw_ring_rxd_1 *rxdp;
enum vxge_hw_status status = VXGE_HW_OK;
u64 control_0, own;
channel = &ring->channel;
vxge_hw_channel_dtr_try_complete(channel, rxdh);
rxdp = *rxdh;
if (rxdp == NULL) {
status = VXGE_HW_INF_NO_MORE_COMPLETED_DESCRIPTORS;
goto exit;
}
control_0 = rxdp->control_0;
own = control_0 & VXGE_HW_RING_RXD_LIST_OWN_ADAPTER;
*t_code = (u8)VXGE_HW_RING_RXD_T_CODE_GET(control_0);
/* check whether it is not the end */
if (!own || *t_code == VXGE_HW_RING_T_CODE_FRM_DROP) {
vxge_assert(rxdp->host_control != 0);
++ring->cmpl_cnt;
vxge_hw_channel_dtr_complete(channel);
vxge_assert(*t_code != VXGE_HW_RING_RXD_T_CODE_UNUSED);
ring->stats->common_stats.usage_cnt++;
if (ring->stats->common_stats.usage_max <
ring->stats->common_stats.usage_cnt)
ring->stats->common_stats.usage_max =
ring->stats->common_stats.usage_cnt;
status = VXGE_HW_OK;
goto exit;
}
/* reset it, since we don't want to return
* garbage to the driver */
*rxdh = NULL;
status = VXGE_HW_INF_NO_MORE_COMPLETED_DESCRIPTORS;
exit:
return status;
}
/**
* vxge_hw_ring_handle_tcode - Handle transfer code.
* @ring: Handle to the ring object used for receive
* @rxdh: Descriptor handle.
* @t_code: One of the enumerated (and documented in the Titan user guide)
* "transfer codes".
*
* Handle descriptor's transfer code. The latter comes with each completed
* descriptor.
*
* Returns: one of the enum vxge_hw_status{} enumerated types.
* VXGE_HW_OK - for success.
* VXGE_HW_ERR_CRITICAL - when a critical error is encountered.
*/
enum vxge_hw_status vxge_hw_ring_handle_tcode(
struct __vxge_hw_ring *ring, void *rxdh, u8 t_code)
{
enum vxge_hw_status status = VXGE_HW_OK;
/* If the t_code is not supported and is other than 0x5 (an unparseable
* packet, such as one with an unknown IPv6 header), drop it.
*/
if (t_code == VXGE_HW_RING_T_CODE_OK ||
t_code == VXGE_HW_RING_T_CODE_L3_PKT_ERR) {
status = VXGE_HW_OK;
goto exit;
}
if (t_code > VXGE_HW_RING_T_CODE_MULTI_ERR) {
status = VXGE_HW_ERR_INVALID_TCODE;
goto exit;
}
ring->stats->rxd_t_code_err_cnt[t_code]++;
exit:
return status;
}
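/*
 * Illustrative sketch (not part of the original driver): a minimal receive
 * completion loop over the ring APIs above.  The helper name and the way
 * buffers are handed to the stack are hypothetical.
 */
static void vxge_example_rx_poll(struct __vxge_hw_ring *ring)
{
	void *rxdh;
	u8 t_code;

	while (vxge_hw_ring_rxd_next_completed(ring, &rxdh, &t_code) ==
	       VXGE_HW_OK) {
		if (vxge_hw_ring_handle_tcode(ring, rxdh, t_code) !=
		    VXGE_HW_OK) {
			/* unsupported transfer code: recycle the descriptor */
			vxge_hw_ring_rxd_free(ring, rxdh);
			continue;
		}
		/* ... hand the buffer to the stack, then refill and repost
		 * the descriptor via vxge_hw_ring_rxd_post() ...
		 */
	}
}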
/**
* __vxge_hw_non_offload_db_post - Post non offload doorbell
*
* @fifo: Fifo handle
* @txdl_ptr: The starting location of the TxDL in host memory
* @num_txds: The highest TxD in this TxDL (0 to 255 means 1 to 256)
* @no_snoop: No snoop flags
*
* This function posts a non-offload doorbell to doorbell FIFO
*
*/
static void __vxge_hw_non_offload_db_post(struct __vxge_hw_fifo *fifo,
u64 txdl_ptr, u32 num_txds, u32 no_snoop)
{
writeq(VXGE_HW_NODBW_TYPE(VXGE_HW_NODBW_TYPE_NODBW) |
VXGE_HW_NODBW_LAST_TXD_NUMBER(num_txds) |
VXGE_HW_NODBW_GET_NO_SNOOP(no_snoop),
&fifo->nofl_db->control_0);
writeq(txdl_ptr, &fifo->nofl_db->txdl_ptr);
}
/**
* vxge_hw_fifo_free_txdl_count_get - returns the number of txdls available in
* the fifo
* @fifoh: Handle to the fifo object used for non offload send
*/
u32 vxge_hw_fifo_free_txdl_count_get(struct __vxge_hw_fifo *fifoh)
{
return vxge_hw_channel_dtr_count(&fifoh->channel);
}
/**
* vxge_hw_fifo_txdl_reserve - Reserve fifo descriptor.
* @fifo: Handle to the fifo object used for non offload send
* @txdlh: Reserved descriptor. On success HW fills this "out" parameter
* with a valid handle.
* @txdl_priv: Buffer to return the pointer to per txdl space
*
* Reserve a single TxDL (that is, fifo descriptor)
* for the subsequent filling-in by the driver
* and posting on the corresponding channel (@channelh)
* via vxge_hw_fifo_txdl_post().
*
* Note: it is the responsibility of driver to reserve multiple descriptors
* for lengthy (e.g., LSO) transmit operation. A single fifo descriptor
* carries up to configured number (fifo.max_frags) of contiguous buffers.
*
* Returns: VXGE_HW_OK - success;
* VXGE_HW_INF_OUT_OF_DESCRIPTORS - Currently no descriptors available
*
*/
enum vxge_hw_status vxge_hw_fifo_txdl_reserve(
struct __vxge_hw_fifo *fifo,
void **txdlh, void **txdl_priv)
{
struct __vxge_hw_channel *channel;
enum vxge_hw_status status;
int i;
channel = &fifo->channel;
status = vxge_hw_channel_dtr_alloc(channel, txdlh);
if (status == VXGE_HW_OK) {
struct vxge_hw_fifo_txd *txdp =
(struct vxge_hw_fifo_txd *)*txdlh;
struct __vxge_hw_fifo_txdl_priv *priv;
priv = __vxge_hw_fifo_txdl_priv(fifo, txdp);
/* reset the TxDL's private */
priv->align_dma_offset = 0;
priv->align_vaddr_start = priv->align_vaddr;
priv->align_used_frags = 0;
priv->frags = 0;
priv->alloc_frags = fifo->config->max_frags;
priv->next_txdl_priv = NULL;
*txdl_priv = (void *)(size_t)txdp->host_control;
for (i = 0; i < fifo->config->max_frags; i++) {
txdp = ((struct vxge_hw_fifo_txd *)*txdlh) + i;
txdp->control_0 = txdp->control_1 = 0;
}
}
return status;
}
/**
* vxge_hw_fifo_txdl_buffer_set - Set transmit buffer pointer in the
* descriptor.
* @fifo: Handle to the fifo object used for non offload send
* @txdlh: Descriptor handle.
* @frag_idx: Index of the data buffer in the caller's scatter-gather list
* (of buffers).
* @dma_pointer: DMA address of the data buffer referenced by @frag_idx.
* @size: Size of the data buffer (in bytes).
*
* This API is part of the preparation of the transmit descriptor for posting
* (via vxge_hw_fifo_txdl_post()). The related "preparation" APIs include
* vxge_hw_fifo_txdl_mss_set() and vxge_hw_fifo_txdl_cksum_set_bits().
* All three APIs fill in the fields of the fifo descriptor,
* in accordance with the Titan specification.
*
*/
void vxge_hw_fifo_txdl_buffer_set(struct __vxge_hw_fifo *fifo,
void *txdlh, u32 frag_idx,
dma_addr_t dma_pointer, u32 size)
{
struct __vxge_hw_fifo_txdl_priv *txdl_priv;
struct vxge_hw_fifo_txd *txdp, *txdp_last;
txdl_priv = __vxge_hw_fifo_txdl_priv(fifo, txdlh);
txdp = (struct vxge_hw_fifo_txd *)txdlh + txdl_priv->frags;
if (frag_idx != 0)
txdp->control_0 = txdp->control_1 = 0;
else {
txdp->control_0 |= VXGE_HW_FIFO_TXD_GATHER_CODE(
VXGE_HW_FIFO_TXD_GATHER_CODE_FIRST);
txdp->control_1 |= fifo->interrupt_type;
txdp->control_1 |= VXGE_HW_FIFO_TXD_INT_NUMBER(
fifo->tx_intr_num);
if (txdl_priv->frags) {
txdp_last = (struct vxge_hw_fifo_txd *)txdlh +
(txdl_priv->frags - 1);
txdp_last->control_0 |= VXGE_HW_FIFO_TXD_GATHER_CODE(
VXGE_HW_FIFO_TXD_GATHER_CODE_LAST);
}
}
vxge_assert(frag_idx < txdl_priv->alloc_frags);
txdp->buffer_pointer = (u64)dma_pointer;
txdp->control_0 |= VXGE_HW_FIFO_TXD_BUFFER_SIZE(size);
fifo->stats->total_buffers++;
txdl_priv->frags++;
}
/**
* vxge_hw_fifo_txdl_post - Post descriptor on the fifo channel.
* @fifo: Handle to the fifo object used for non offload send
* @txdlh: Descriptor obtained via vxge_hw_fifo_txdl_reserve()
*
* Post descriptor on the 'fifo' type channel for transmission.
* Prior to posting the descriptor should be filled in accordance with
* Host/Titan interface specification for a given service (LL, etc.).
*
*/
void vxge_hw_fifo_txdl_post(struct __vxge_hw_fifo *fifo, void *txdlh)
{
struct __vxge_hw_fifo_txdl_priv *txdl_priv;
struct vxge_hw_fifo_txd *txdp_last;
struct vxge_hw_fifo_txd *txdp_first;
txdl_priv = __vxge_hw_fifo_txdl_priv(fifo, txdlh);
txdp_first = txdlh;
txdp_last = (struct vxge_hw_fifo_txd *)txdlh + (txdl_priv->frags - 1);
txdp_last->control_0 |=
VXGE_HW_FIFO_TXD_GATHER_CODE(VXGE_HW_FIFO_TXD_GATHER_CODE_LAST);
txdp_first->control_0 |= VXGE_HW_FIFO_TXD_LIST_OWN_ADAPTER;
vxge_hw_channel_dtr_post(&fifo->channel, txdlh);
__vxge_hw_non_offload_db_post(fifo,
(u64)txdl_priv->dma_addr,
txdl_priv->frags - 1,
fifo->no_snoop_bits);
fifo->stats->total_posts++;
fifo->stats->common_stats.usage_cnt++;
if (fifo->stats->common_stats.usage_max <
fifo->stats->common_stats.usage_cnt)
fifo->stats->common_stats.usage_max =
fifo->stats->common_stats.usage_cnt;
}
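/*
 * Illustrative sketch (not part of the original driver): posting a single
 * linear buffer on a fifo with the APIs above.  The helper name and the
 * already-mapped DMA address are hypothetical.
 */
static enum vxge_hw_status
vxge_example_xmit_one(struct __vxge_hw_fifo *fifo, dma_addr_t dma, u32 len)
{
	void *txdlh, *txdl_priv;
	enum vxge_hw_status status;

	status = vxge_hw_fifo_txdl_reserve(fifo, &txdlh, &txdl_priv);
	if (status != VXGE_HW_OK)
		return status;	/* no descriptors free; caller should stop the queue */

	/* single fragment at index 0 */
	vxge_hw_fifo_txdl_buffer_set(fifo, txdlh, 0, dma, len);

	/* hand the TxDL to the adapter (rings the non-offload doorbell) */
	vxge_hw_fifo_txdl_post(fifo, txdlh);
	return VXGE_HW_OK;
}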
/**
* vxge_hw_fifo_txdl_next_completed - Retrieve next completed descriptor.
* @fifo: Handle to the fifo object used for non offload send
* @txdlh: Descriptor handle. Returned by HW.
* @t_code: Transfer code, as per Titan User Guide,
* Transmit Descriptor Format.
* Returned by HW.
*
* Retrieve the _next_ completed descriptor.
* HW uses channel callback (*vxge_hw_channel_callback_f) to notify the
* driver of new completed descriptors. After that
* the driver can use vxge_hw_fifo_txdl_next_completed to retrieve the rest
* of the completions (the very first completion is passed by HW via
* vxge_hw_channel_callback_f).
*
* Implementation-wise, the driver is free to call
* vxge_hw_fifo_txdl_next_completed either immediately from inside the
* channel callback, or in a deferred fashion, in a separate (from HW)
* context.
*
* Non-zero @t_code means failure to process the descriptor.
* The failure could happen, for instance, when the link is
* down, in which case Titan completes the descriptor because it
* is not able to send the data out.
*
* For details please refer to Titan User Guide.
*
* Returns: VXGE_HW_OK - success.
* VXGE_HW_INF_NO_MORE_COMPLETED_DESCRIPTORS - No completed descriptors
* are currently available for processing.
*
*/
enum vxge_hw_status vxge_hw_fifo_txdl_next_completed(
struct __vxge_hw_fifo *fifo, void **txdlh,
enum vxge_hw_fifo_tcode *t_code)
{
struct __vxge_hw_channel *channel;
struct vxge_hw_fifo_txd *txdp;
enum vxge_hw_status status = VXGE_HW_OK;
channel = &fifo->channel;
vxge_hw_channel_dtr_try_complete(channel, txdlh);
txdp = *txdlh;
if (txdp == NULL) {
status = VXGE_HW_INF_NO_MORE_COMPLETED_DESCRIPTORS;
goto exit;
}
/* check whether host owns it */
if (!(txdp->control_0 & VXGE_HW_FIFO_TXD_LIST_OWN_ADAPTER)) {
vxge_assert(txdp->host_control != 0);
vxge_hw_channel_dtr_complete(channel);
*t_code = (u8)VXGE_HW_FIFO_TXD_T_CODE_GET(txdp->control_0);
if (fifo->stats->common_stats.usage_cnt > 0)
fifo->stats->common_stats.usage_cnt--;
status = VXGE_HW_OK;
goto exit;
}
/* no more completions */
*txdlh = NULL;
status = VXGE_HW_INF_NO_MORE_COMPLETED_DESCRIPTORS;
exit:
return status;
}
/**
* vxge_hw_fifo_handle_tcode - Handle transfer code.
* @fifo: Handle to the fifo object used for non offload send
* @txdlh: Descriptor handle.
* @t_code: One of the enumerated (and documented in the Titan user guide)
* "transfer codes".
*
* Handle descriptor's transfer code. The latter comes with each completed
* descriptor.
*
* Returns: one of the enum vxge_hw_status{} enumerated types.
* VXGE_HW_OK - for success.
* VXGE_HW_ERR_CRITICAL - when encounters critical error.
*/
enum vxge_hw_status vxge_hw_fifo_handle_tcode(struct __vxge_hw_fifo *fifo,
void *txdlh,
enum vxge_hw_fifo_tcode t_code)
{
enum vxge_hw_status status = VXGE_HW_OK;
if (((t_code & 0x7) < 0) || ((t_code & 0x7) > 0x4)) {
status = VXGE_HW_ERR_INVALID_TCODE;
goto exit;
}
fifo->stats->txd_t_code_err_cnt[t_code]++;
exit:
return status;
}
/**
* vxge_hw_fifo_txdl_free - Free descriptor.
* @fifo: Handle to the fifo object used for non offload send
* @txdlh: Descriptor handle.
*
* Free the reserved descriptor. This operation is "symmetrical" to
* vxge_hw_fifo_txdl_reserve. The "free-ing" completes the descriptor's
* lifecycle.
*
* After free-ing (see vxge_hw_fifo_txdl_free()) the descriptor again can
* be:
*
* - reserved (vxge_hw_fifo_txdl_reserve);
*
* - posted (vxge_hw_fifo_txdl_post);
*
* - completed (vxge_hw_fifo_txdl_next_completed);
*
* - and recycled again (vxge_hw_fifo_txdl_free).
*
* For alternative state transitions and more details please refer to
* the design doc.
*
*/
void vxge_hw_fifo_txdl_free(struct __vxge_hw_fifo *fifo, void *txdlh)
{
struct __vxge_hw_channel *channel;
channel = &fifo->channel;
vxge_hw_channel_dtr_free(channel, txdlh);
}
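/*
 * Example (illustrative sketch only): draining transmit completions with
 * the APIs above, following the reserve/post/complete/free lifecycle
 * described in the vxge_hw_fifo_txdl_free() comment.  A real caller would
 * also unmap buffers and free skbs recorded in the per-descriptor private
 * area before recycling the TxDL.
 */
static void vxge_example_tx_complete(struct __vxge_hw_fifo *fifo)
{
	enum vxge_hw_fifo_tcode t_code;
	void *txdlh;

	while (vxge_hw_fifo_txdl_next_completed(fifo, &txdlh, &t_code) ==
	       VXGE_HW_OK) {
		/* Non-zero transfer codes indicate a transmit error */
		if (t_code)
			vxge_hw_fifo_handle_tcode(fifo, txdlh, t_code);

		/* Return the descriptor so it can be reserved again */
		vxge_hw_fifo_txdl_free(fifo, txdlh);
	}
}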
/**
* vxge_hw_vpath_mac_addr_add - Add the mac address entry for this vpath to MAC address table.
* @vp: Vpath handle.
* @macaddr: MAC address to be added for this vpath into the list
* @macaddr_mask: MAC address mask for macaddr
* @duplicate_mode: Duplicate MAC address add mode. Please see
* enum vxge_hw_vpath_mac_addr_add_mode{}
*
* Adds the given mac address and mac address mask into the list for this
* vpath.
* see also: vxge_hw_vpath_mac_addr_delete, vxge_hw_vpath_mac_addr_get and
* vxge_hw_vpath_mac_addr_get_next
*
*/
enum vxge_hw_status
vxge_hw_vpath_mac_addr_add(
struct __vxge_hw_vpath_handle *vp,
u8 *macaddr,
u8 *macaddr_mask,
enum vxge_hw_vpath_mac_addr_add_mode duplicate_mode)
{
u32 i;
u64 data1 = 0ULL;
u64 data2 = 0ULL;
enum vxge_hw_status status = VXGE_HW_OK;
if (vp == NULL) {
status = VXGE_HW_ERR_INVALID_HANDLE;
goto exit;
}
for (i = 0; i < ETH_ALEN; i++) {
data1 <<= 8;
data1 |= (u8)macaddr[i];
data2 <<= 8;
data2 |= (u8)macaddr_mask[i];
}
switch (duplicate_mode) {
case VXGE_HW_VPATH_MAC_ADDR_ADD_DUPLICATE:
i = 0;
break;
case VXGE_HW_VPATH_MAC_ADDR_DISCARD_DUPLICATE:
i = 1;
break;
case VXGE_HW_VPATH_MAC_ADDR_REPLACE_DUPLICATE:
i = 2;
break;
default:
i = 0;
break;
}
status = __vxge_hw_vpath_rts_table_set(vp,
VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_ADD_ENTRY,
VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_DA,
0,
VXGE_HW_RTS_ACCESS_STEER_DATA0_DA_MAC_ADDR(data1),
VXGE_HW_RTS_ACCESS_STEER_DATA1_DA_MAC_ADDR_MASK(data2)|
VXGE_HW_RTS_ACCESS_STEER_DATA1_DA_MAC_ADDR_MODE(i));
exit:
return status;
}
/**
* vxge_hw_vpath_mac_addr_get - Get the first mac address entry
* @vp: Vpath handle.
* @macaddr: First MAC address entry for this vpath in the list
* @macaddr_mask: MAC address mask for macaddr
*
* Get the first mac address entry for this vpath from MAC address table.
* Return: the first mac address and mac address mask in the list for this
* vpath.
* see also: vxge_hw_vpath_mac_addr_get_next
*
*/
enum vxge_hw_status
vxge_hw_vpath_mac_addr_get(
struct __vxge_hw_vpath_handle *vp,
u8 *macaddr,
u8 *macaddr_mask)
{
u32 i;
u64 data1 = 0ULL;
u64 data2 = 0ULL;
enum vxge_hw_status status = VXGE_HW_OK;
if (vp == NULL) {
status = VXGE_HW_ERR_INVALID_HANDLE;
goto exit;
}
status = __vxge_hw_vpath_rts_table_get(vp,
VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_LIST_FIRST_ENTRY,
VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_DA,
0, &data1, &data2);
if (status != VXGE_HW_OK)
goto exit;
data1 = VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_DA_MAC_ADDR(data1);
data2 = VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_DA_MAC_ADDR_MASK(data2);
for (i = ETH_ALEN; i > 0; i--) {
macaddr[i-1] = (u8)(data1 & 0xFF);
data1 >>= 8;
macaddr_mask[i-1] = (u8)(data2 & 0xFF);
data2 >>= 8;
}
exit:
return status;
}
/**
* vxge_hw_vpath_mac_addr_get_next - Get the next mac address entry
* @vp: Vpath handle.
* @macaddr: Next MAC address entry for this vpath in the list
* @macaddr_mask: MAC address mask for macaddr
*
* Get the next mac address entry for this vpath from MAC address table.
* Return: the next mac address and mac address mask in the list for this
* vpath.
* see also: vxge_hw_vpath_mac_addr_get
*
*/
enum vxge_hw_status
vxge_hw_vpath_mac_addr_get_next(
struct __vxge_hw_vpath_handle *vp,
u8 *macaddr,
u8 *macaddr_mask)
{
u32 i;
u64 data1 = 0ULL;
u64 data2 = 0ULL;
enum vxge_hw_status status = VXGE_HW_OK;
if (vp == NULL) {
status = VXGE_HW_ERR_INVALID_HANDLE;
goto exit;
}
status = __vxge_hw_vpath_rts_table_get(vp,
VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_LIST_NEXT_ENTRY,
VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_DA,
0, &data1, &data2);
if (status != VXGE_HW_OK)
goto exit;
data1 = VXGE_HW_RTS_ACCESS_STEER_DATA0_GET_DA_MAC_ADDR(data1);
data2 = VXGE_HW_RTS_ACCESS_STEER_DATA1_GET_DA_MAC_ADDR_MASK(data2);
for (i = ETH_ALEN; i > 0; i--) {
macaddr[i-1] = (u8)(data1 & 0xFF);
data1 >>= 8;
macaddr_mask[i-1] = (u8)(data2 & 0xFF);
data2 >>= 8;
}
exit:
return status;
}
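/*
 * Example (illustrative sketch only): walking the vpath's MAC address
 * table with the _get/_get_next pair documented above.  What a caller
 * does with each entry is up to it; here the loop merely iterates until
 * the table is exhausted.
 */
static void vxge_example_walk_da_table(struct __vxge_hw_vpath_handle *vp)
{
	u8 macaddr[ETH_ALEN], macaddr_mask[ETH_ALEN];
	enum vxge_hw_status status;

	status = vxge_hw_vpath_mac_addr_get(vp, macaddr, macaddr_mask);
	while (status == VXGE_HW_OK) {
		/* macaddr/macaddr_mask hold the current entry here */
		status = vxge_hw_vpath_mac_addr_get_next(vp, macaddr,
							 macaddr_mask);
	}
}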
/**
* vxge_hw_vpath_mac_addr_delete - Delete the mac address entry for this vpath from the MAC address table.
* @vp: Vpath handle.
* @macaddr: MAC address to be deleted from the list for this vpath
* @macaddr_mask: MAC address mask for macaddr
*
* Deletes the given mac address and mac address mask from the list for this
* vpath.
* see also: vxge_hw_vpath_mac_addr_add, vxge_hw_vpath_mac_addr_get and
* vxge_hw_vpath_mac_addr_get_next
*
*/
enum vxge_hw_status
vxge_hw_vpath_mac_addr_delete(
struct __vxge_hw_vpath_handle *vp,
u8 *macaddr,
u8 *macaddr_mask)
{
u32 i;
u64 data1 = 0ULL;
u64 data2 = 0ULL;
enum vxge_hw_status status = VXGE_HW_OK;
if (vp == NULL) {
status = VXGE_HW_ERR_INVALID_HANDLE;
goto exit;
}
for (i = 0; i < ETH_ALEN; i++) {
data1 <<= 8;
data1 |= (u8)macaddr[i];
data2 <<= 8;
data2 |= (u8)macaddr_mask[i];
}
status = __vxge_hw_vpath_rts_table_set(vp,
VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_DELETE_ENTRY,
VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_DA,
0,
VXGE_HW_RTS_ACCESS_STEER_DATA0_DA_MAC_ADDR(data1),
VXGE_HW_RTS_ACCESS_STEER_DATA1_DA_MAC_ADDR_MASK(data2));
exit:
return status;
}
/**
* vxge_hw_vpath_vid_add - Add the vlan id entry for this vpath to vlan id table.
* @vp: Vpath handle.
* @vid: vlan id to be added for this vpath into the list
*
* Adds the given vlan id into the list for this vpath.
* see also: vxge_hw_vpath_vid_delete
*
*/
enum vxge_hw_status
vxge_hw_vpath_vid_add(struct __vxge_hw_vpath_handle *vp, u64 vid)
{
enum vxge_hw_status status = VXGE_HW_OK;
if (vp == NULL) {
status = VXGE_HW_ERR_INVALID_HANDLE;
goto exit;
}
status = __vxge_hw_vpath_rts_table_set(vp,
VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_ADD_ENTRY,
VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_VID,
0, VXGE_HW_RTS_ACCESS_STEER_DATA0_VLAN_ID(vid), 0);
exit:
return status;
}
/**
* vxge_hw_vpath_vid_delete - Delete the vlan id entry for this vpath
* from the vlan id table.
* @vp: Vpath handle.
* @vid: vlan id to be deleted from the list for this vpath
*
* Deletes the given vlan id from the list for this vpath.
* see also: vxge_hw_vpath_vid_add
*
*/
enum vxge_hw_status
vxge_hw_vpath_vid_delete(struct __vxge_hw_vpath_handle *vp, u64 vid)
{
enum vxge_hw_status status = VXGE_HW_OK;
if (vp == NULL) {
status = VXGE_HW_ERR_INVALID_HANDLE;
goto exit;
}
status = __vxge_hw_vpath_rts_table_set(vp,
VXGE_HW_RTS_ACCESS_STEER_CTRL_ACTION_DELETE_ENTRY,
VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_VID,
0, VXGE_HW_RTS_ACCESS_STEER_DATA0_VLAN_ID(vid), 0);
exit:
return status;
}
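/*
 * Example (illustrative sketch only): toggling a VLAN id on a vpath with
 * the add/delete pair above; any error status is simply propagated to
 * the caller.
 */
static enum vxge_hw_status
vxge_example_set_vid(struct __vxge_hw_vpath_handle *vp, u64 vid, int enable)
{
	return enable ? vxge_hw_vpath_vid_add(vp, vid) :
			vxge_hw_vpath_vid_delete(vp, vid);
}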
/**
* vxge_hw_vpath_promisc_enable - Enable promiscuous mode.
* @vp: Vpath handle.
*
* Enable promiscuous mode of Titan-e operation.
*
* See also: vxge_hw_vpath_promisc_disable().
*/
enum vxge_hw_status vxge_hw_vpath_promisc_enable(
struct __vxge_hw_vpath_handle *vp)
{
u64 val64;
struct __vxge_hw_virtualpath *vpath;
enum vxge_hw_status status = VXGE_HW_OK;
if ((vp == NULL) || (vp->vpath->ringh == NULL)) {
status = VXGE_HW_ERR_INVALID_HANDLE;
goto exit;
}
vpath = vp->vpath;
/* Enable promiscuous mode for function 0 only */
if (!(vpath->hldev->access_rights &
VXGE_HW_DEVICE_ACCESS_RIGHT_MRPCIM))
return VXGE_HW_OK;
val64 = readq(&vpath->vp_reg->rxmac_vcfg0);
if (!(val64 & VXGE_HW_RXMAC_VCFG0_UCAST_ALL_ADDR_EN)) {
val64 |= VXGE_HW_RXMAC_VCFG0_UCAST_ALL_ADDR_EN |
VXGE_HW_RXMAC_VCFG0_MCAST_ALL_ADDR_EN |
VXGE_HW_RXMAC_VCFG0_BCAST_EN |
VXGE_HW_RXMAC_VCFG0_ALL_VID_EN;
writeq(val64, &vpath->vp_reg->rxmac_vcfg0);
}
exit:
return status;
}
/**
* vxge_hw_vpath_promisc_disable - Disable promiscuous mode.
* @vp: Vpath handle.
*
* Disable promiscuous mode of Titan-e operation.
*
* See also: vxge_hw_vpath_promisc_enable().
*/
enum vxge_hw_status vxge_hw_vpath_promisc_disable(
struct __vxge_hw_vpath_handle *vp)
{
u64 val64;
struct __vxge_hw_virtualpath *vpath;
enum vxge_hw_status status = VXGE_HW_OK;
if ((vp == NULL) || (vp->vpath->ringh == NULL)) {
status = VXGE_HW_ERR_INVALID_HANDLE;
goto exit;
}
vpath = vp->vpath;
val64 = readq(&vpath->vp_reg->rxmac_vcfg0);
if (val64 & VXGE_HW_RXMAC_VCFG0_UCAST_ALL_ADDR_EN) {
val64 &= ~(VXGE_HW_RXMAC_VCFG0_UCAST_ALL_ADDR_EN |
VXGE_HW_RXMAC_VCFG0_MCAST_ALL_ADDR_EN |
VXGE_HW_RXMAC_VCFG0_ALL_VID_EN);
writeq(val64, &vpath->vp_reg->rxmac_vcfg0);
}
exit:
return status;
}
/*
* vxge_hw_vpath_bcast_enable - Enable broadcast
* @vp: Vpath handle.
*
* Enable receiving broadcasts.
*/
enum vxge_hw_status vxge_hw_vpath_bcast_enable(
struct __vxge_hw_vpath_handle *vp)
{
u64 val64;
struct __vxge_hw_virtualpath *vpath;
enum vxge_hw_status status = VXGE_HW_OK;
if ((vp == NULL) || (vp->vpath->ringh == NULL)) {
status = VXGE_HW_ERR_INVALID_HANDLE;
goto exit;
}
vpath = vp->vpath;
val64 = readq(&vpath->vp_reg->rxmac_vcfg0);
if (!(val64 & VXGE_HW_RXMAC_VCFG0_BCAST_EN)) {
val64 |= VXGE_HW_RXMAC_VCFG0_BCAST_EN;
writeq(val64, &vpath->vp_reg->rxmac_vcfg0);
}
exit:
return status;
}
/**
* vxge_hw_vpath_mcast_enable - Enable multicast addresses.
* @vp: Vpath handle.
*
* Enable Titan-e multicast addresses.
* Returns: VXGE_HW_OK on success.
*
*/
enum vxge_hw_status vxge_hw_vpath_mcast_enable(
struct __vxge_hw_vpath_handle *vp)
{
u64 val64;
struct __vxge_hw_virtualpath *vpath;
enum vxge_hw_status status = VXGE_HW_OK;
if ((vp == NULL) || (vp->vpath->ringh == NULL)) {
status = VXGE_HW_ERR_INVALID_HANDLE;
goto exit;
}
vpath = vp->vpath;
val64 = readq(&vpath->vp_reg->rxmac_vcfg0);
if (!(val64 & VXGE_HW_RXMAC_VCFG0_MCAST_ALL_ADDR_EN)) {
val64 |= VXGE_HW_RXMAC_VCFG0_MCAST_ALL_ADDR_EN;
writeq(val64, &vpath->vp_reg->rxmac_vcfg0);
}
exit:
return status;
}
/**
* vxge_hw_vpath_mcast_disable - Disable multicast addresses.
* @vp: Vpath handle.
*
* Disable Titan-e multicast addresses.
* Returns: VXGE_HW_OK - success.
* VXGE_HW_ERR_INVALID_HANDLE - Invalid handle
*
*/
enum vxge_hw_status
vxge_hw_vpath_mcast_disable(struct __vxge_hw_vpath_handle *vp)
{
u64 val64;
struct __vxge_hw_virtualpath *vpath;
enum vxge_hw_status status = VXGE_HW_OK;
if ((vp == NULL) || (vp->vpath->ringh == NULL)) {
status = VXGE_HW_ERR_INVALID_HANDLE;
goto exit;
}
vpath = vp->vpath;
val64 = readq(&vpath->vp_reg->rxmac_vcfg0);
if (val64 & VXGE_HW_RXMAC_VCFG0_MCAST_ALL_ADDR_EN) {
val64 &= ~VXGE_HW_RXMAC_VCFG0_MCAST_ALL_ADDR_EN;
writeq(val64, &vpath->vp_reg->rxmac_vcfg0);
}
exit:
return status;
}
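/*
 * Example (illustrative sketch only): a simple rx-mode helper built on
 * the vpath calls above.  Broadcast reception is kept on unconditionally;
 * promiscuous and all-multicast reception follow the two flags.  Return
 * values are ignored here purely for brevity.
 */
static void vxge_example_set_rx_mode(struct __vxge_hw_vpath_handle *vp,
				     int promisc, int allmulti)
{
	(void)vxge_hw_vpath_bcast_enable(vp);

	if (promisc)
		(void)vxge_hw_vpath_promisc_enable(vp);
	else
		(void)vxge_hw_vpath_promisc_disable(vp);

	if (allmulti)
		(void)vxge_hw_vpath_mcast_enable(vp);
	else
		(void)vxge_hw_vpath_mcast_disable(vp);
}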
/*
* vxge_hw_vpath_alarm_process - Process Alarms.
* @vpath: Virtual Path.
* @skip_alarms: Do not clear the alarms
*
* Process vpath alarms.
*
*/
enum vxge_hw_status vxge_hw_vpath_alarm_process(
struct __vxge_hw_vpath_handle *vp,
u32 skip_alarms)
{
enum vxge_hw_status status = VXGE_HW_OK;
if (vp == NULL) {
status = VXGE_HW_ERR_INVALID_HANDLE;
goto exit;
}
status = __vxge_hw_vpath_alarm_process(vp->vpath, skip_alarms);
exit:
return status;
}
/**
* vxge_hw_vpath_msix_set - Associate MSIX vectors with TIM interrupts and
* alarms
* @vp: Virtual Path handle.
* @tim_msix_id: MSIX vectors associated with VXGE_HW_MAX_INTR_PER_VP number of
* interrupts (can be repeated). If fifo or ring are not enabled
* the MSIX vector for that should be set to 0
* @alarm_msix_id: MSIX vector for alarm.
*
* This API will associate a given MSIX vector numbers with the four TIM
* interrupts and alarm interrupt.
*/
void
vxge_hw_vpath_msix_set(struct __vxge_hw_vpath_handle *vp, int *tim_msix_id,
int alarm_msix_id)
{
u64 val64;
struct __vxge_hw_virtualpath *vpath = vp->vpath;
struct vxge_hw_vpath_reg __iomem *vp_reg = vpath->vp_reg;
u32 vp_id = vp->vpath->vp_id;
val64 = VXGE_HW_INTERRUPT_CFG0_GROUP0_MSIX_FOR_TXTI(
(vp_id * 4) + tim_msix_id[0]) |
VXGE_HW_INTERRUPT_CFG0_GROUP1_MSIX_FOR_TXTI(
(vp_id * 4) + tim_msix_id[1]);
writeq(val64, &vp_reg->interrupt_cfg0);
writeq(VXGE_HW_INTERRUPT_CFG2_ALARM_MAP_TO_MSG(
(vpath->hldev->first_vp_id * 4) + alarm_msix_id),
&vp_reg->interrupt_cfg2);
if (vpath->hldev->config.intr_mode ==
VXGE_HW_INTR_MODE_MSIX_ONE_SHOT) {
__vxge_hw_pio_mem_write32_upper((u32)vxge_bVALn(
VXGE_HW_ONE_SHOT_VECT0_EN_ONE_SHOT_VECT0_EN,
0, 32), &vp_reg->one_shot_vect0_en);
__vxge_hw_pio_mem_write32_upper((u32)vxge_bVALn(
VXGE_HW_ONE_SHOT_VECT1_EN_ONE_SHOT_VECT1_EN,
0, 32), &vp_reg->one_shot_vect1_en);
__vxge_hw_pio_mem_write32_upper((u32)vxge_bVALn(
VXGE_HW_ONE_SHOT_VECT2_EN_ONE_SHOT_VECT2_EN,
0, 32), &vp_reg->one_shot_vect2_en);
}
}
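/*
 * Example (illustrative sketch only): associating MSI-X vectors with a
 * vpath's TIM interrupts and then unmasking them.  The vector numbers are
 * hypothetical; a real driver derives them from its MSI-X allocation.
 */
static void vxge_example_msix_setup(struct __vxge_hw_vpath_handle *vp)
{
	int tim_msix_id[VXGE_HW_MAX_INTR_PER_VP] = {0, 1, 0, 0};
	int alarm_msix_id = 2;	/* hypothetical alarm vector */

	vxge_hw_vpath_msix_set(vp, tim_msix_id, alarm_msix_id);
	vxge_hw_vpath_msix_unmask(vp, tim_msix_id[0]);	/* Tx TIM vector */
	vxge_hw_vpath_msix_unmask(vp, tim_msix_id[1]);	/* Rx TIM vector */
}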
/**
* vxge_hw_vpath_msix_mask - Mask MSIX Vector.
* @vp: Virtual Path handle.
* @msix_id: MSIX ID
*
* The function masks the msix interrupt for the given msix_id
*
*/
void
vxge_hw_vpath_msix_mask(struct __vxge_hw_vpath_handle *vp, int msix_id)
{
struct __vxge_hw_device *hldev = vp->vpath->hldev;
__vxge_hw_pio_mem_write32_upper(
(u32) vxge_bVALn(vxge_mBIT(msix_id >> 2), 0, 32),
&hldev->common_reg->set_msix_mask_vect[msix_id % 4]);
}
/**
* vxge_hw_vpath_msix_clear - Clear MSIX Vector.
* @vp: Virtual Path handle.
* @msix_id: MSIX ID
*
* The function clears the msix interrupt for the given msix_id
*/
void vxge_hw_vpath_msix_clear(struct __vxge_hw_vpath_handle *vp, int msix_id)
{
struct __vxge_hw_device *hldev = vp->vpath->hldev;
if (hldev->config.intr_mode == VXGE_HW_INTR_MODE_MSIX_ONE_SHOT)
__vxge_hw_pio_mem_write32_upper(
(u32) vxge_bVALn(vxge_mBIT((msix_id >> 2)), 0, 32),
&hldev->common_reg->clr_msix_one_shot_vec[msix_id % 4]);
else
__vxge_hw_pio_mem_write32_upper(
(u32) vxge_bVALn(vxge_mBIT((msix_id >> 2)), 0, 32),
&hldev->common_reg->clear_msix_mask_vect[msix_id % 4]);
}
/**
* vxge_hw_vpath_msix_unmask - Unmask the MSIX Vector.
* @vp: Virtual Path handle.
* @msix_id: MSIX ID
*
* The function unmasks the msix interrupt for the given msix_id
*/
void
vxge_hw_vpath_msix_unmask(struct __vxge_hw_vpath_handle *vp, int msix_id)
{
struct __vxge_hw_device *hldev = vp->vpath->hldev;
__vxge_hw_pio_mem_write32_upper(
(u32)vxge_bVALn(vxge_mBIT(msix_id >> 2), 0, 32),
&hldev->common_reg->clear_msix_mask_vect[msix_id%4]);
}
/**
* vxge_hw_vpath_inta_mask_tx_rx - Mask Tx and Rx interrupts.
* @vp: Virtual Path handle.
*
* Mask Tx and Rx vpath interrupts.
*
* See also: vxge_hw_vpath_inta_unmask_tx_rx()
*/
void vxge_hw_vpath_inta_mask_tx_rx(struct __vxge_hw_vpath_handle *vp)
{
u64 tim_int_mask0[4] = {[0 ...3] = 0};
u32 tim_int_mask1[4] = {[0 ...3] = 0};
u64 val64;
struct __vxge_hw_device *hldev = vp->vpath->hldev;
VXGE_HW_DEVICE_TIM_INT_MASK_SET(tim_int_mask0,
tim_int_mask1, vp->vpath->vp_id);
val64 = readq(&hldev->common_reg->tim_int_mask0);
if ((tim_int_mask0[VXGE_HW_VPATH_INTR_TX] != 0) ||
(tim_int_mask0[VXGE_HW_VPATH_INTR_RX] != 0)) {
writeq((tim_int_mask0[VXGE_HW_VPATH_INTR_TX] |
tim_int_mask0[VXGE_HW_VPATH_INTR_RX] | val64),
&hldev->common_reg->tim_int_mask0);
}
val64 = readl(&hldev->common_reg->tim_int_mask1);
if ((tim_int_mask1[VXGE_HW_VPATH_INTR_TX] != 0) ||
(tim_int_mask1[VXGE_HW_VPATH_INTR_RX] != 0)) {
__vxge_hw_pio_mem_write32_upper(
(tim_int_mask1[VXGE_HW_VPATH_INTR_TX] |
tim_int_mask1[VXGE_HW_VPATH_INTR_RX] | val64),
&hldev->common_reg->tim_int_mask1);
}
}
/**
* vxge_hw_vpath_inta_unmask_tx_rx - Unmask Tx and Rx interrupts.
* @vp: Virtual Path handle.
*
* Unmask Tx and Rx vpath interrupts.
*
* See also: vxge_hw_vpath_inta_mask_tx_rx()
*/
void vxge_hw_vpath_inta_unmask_tx_rx(struct __vxge_hw_vpath_handle *vp)
{
u64 tim_int_mask0[4] = {[0 ...3] = 0};
u32 tim_int_mask1[4] = {[0 ...3] = 0};
u64 val64;
struct __vxge_hw_device *hldev = vp->vpath->hldev;
VXGE_HW_DEVICE_TIM_INT_MASK_SET(tim_int_mask0,
tim_int_mask1, vp->vpath->vp_id);
val64 = readq(&hldev->common_reg->tim_int_mask0);
if ((tim_int_mask0[VXGE_HW_VPATH_INTR_TX] != 0) ||
(tim_int_mask0[VXGE_HW_VPATH_INTR_RX] != 0)) {
writeq((~(tim_int_mask0[VXGE_HW_VPATH_INTR_TX] |
tim_int_mask0[VXGE_HW_VPATH_INTR_RX])) & val64,
&hldev->common_reg->tim_int_mask0);
}
if ((tim_int_mask1[VXGE_HW_VPATH_INTR_TX] != 0) ||
(tim_int_mask1[VXGE_HW_VPATH_INTR_RX] != 0)) {
__vxge_hw_pio_mem_write32_upper(
(~(tim_int_mask1[VXGE_HW_VPATH_INTR_TX] |
tim_int_mask1[VXGE_HW_VPATH_INTR_RX])) & val64,
&hldev->common_reg->tim_int_mask1);
}
}
/**
* vxge_hw_vpath_poll_rx - Poll Rx Virtual Path for completed
* descriptors and process the same.
* @ring: Handle to the ring object used for receive
*
* The function polls the Rx for the completed descriptors and calls
* the driver via supplied completion callback.
*
* Returns: VXGE_HW_OK, if the polling is completed successfully.
* VXGE_HW_COMPLETIONS_REMAIN: There are still more completed
* descriptors available which are yet to be processed.
*
* See also: vxge_hw_vpath_poll_tx()
*/
enum vxge_hw_status vxge_hw_vpath_poll_rx(struct __vxge_hw_ring *ring)
{
u8 t_code;
enum vxge_hw_status status = VXGE_HW_OK;
void *first_rxdh;
int new_count = 0;
ring->cmpl_cnt = 0;
status = vxge_hw_ring_rxd_next_completed(ring, &first_rxdh, &t_code);
if (status == VXGE_HW_OK)
ring->callback(ring, first_rxdh,
t_code, ring->channel.userdata);
if (ring->cmpl_cnt != 0) {
ring->doorbell_cnt += ring->cmpl_cnt;
if (ring->doorbell_cnt >= ring->rxds_limit) {
/*
* Each RxD is of 4 qwords, update the number of
* qwords replenished
*/
new_count = (ring->doorbell_cnt * 4);
/* For each block add 4 more qwords */
ring->total_db_cnt += ring->doorbell_cnt;
if (ring->total_db_cnt >= ring->rxds_per_block) {
new_count += 4;
/* Reset total count */
ring->total_db_cnt %= ring->rxds_per_block;
}
writeq(VXGE_HW_PRC_RXD_DOORBELL_NEW_QW_CNT(new_count),
&ring->vp_reg->prc_rxd_doorbell);
readl(&ring->common_reg->titan_general_int_status);
ring->doorbell_cnt = 0;
}
}
return status;
}
/**
* vxge_hw_vpath_poll_tx - Poll Tx for completed descriptors and process the same.
* @fifo: Handle to the fifo object used for non offload send
* @skb_ptr: pointer to skb
* @nr_skb: number of skbs
* @more: more is coming
*
* The function polls the Tx for the completed descriptors and calls
* the driver via supplied completion callback.
*
* Returns: VXGE_HW_OK, if the polling is completed successfully.
* VXGE_HW_COMPLETIONS_REMAIN: There are still more completed
* descriptors available which are yet to be processed.
*/
enum vxge_hw_status vxge_hw_vpath_poll_tx(struct __vxge_hw_fifo *fifo,
struct sk_buff ***skb_ptr, int nr_skb,
int *more)
{
enum vxge_hw_fifo_tcode t_code;
void *first_txdlh;
enum vxge_hw_status status = VXGE_HW_OK;
struct __vxge_hw_channel *channel;
channel = &fifo->channel;
status = vxge_hw_fifo_txdl_next_completed(fifo,
&first_txdlh, &t_code);
if (status == VXGE_HW_OK)
if (fifo->callback(fifo, first_txdlh, t_code,
channel->userdata, skb_ptr, nr_skb, more) != VXGE_HW_OK)
status = VXGE_HW_COMPLETIONS_REMAIN;
return status;
}
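/*
 * Example (illustrative sketch only): one polling pass over a vpath's
 * ring and fifo using the two helpers above.  The skb array is a
 * placeholder; whether and how it is filled depends on the fifo
 * completion callback installed by the driver.
 */
static void vxge_example_poll(struct __vxge_hw_ring *ring,
			      struct __vxge_hw_fifo *fifo)
{
	struct sk_buff *skbs[16];
	struct sk_buff **skb_ptr = skbs;
	int more = 0;

	(void)vxge_hw_vpath_poll_rx(ring);
	(void)vxge_hw_vpath_poll_tx(fifo, &skb_ptr, ARRAY_SIZE(skbs), &more);
	/* Completed Tx skbs, if any, are now available for the caller */
}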
/******************************************************************************
* This software may be used and distributed according to the terms of
* the GNU General Public License (GPL), incorporated herein by reference.
* Drivers based on or derived from this code fall under the GPL and must
* retain the authorship, copyright and license notice. This file is not
* a complete program and may only be used when the entire operating
* system is licensed under the GPL.
* See the file COPYING in this distribution for more information.
*
* vxge-traffic.h: Driver for Exar Corp's X3100 Series 10GbE PCIe I/O
* Virtualized Server Adapter.
* Copyright(c) 2002-2010 Exar Corp.
******************************************************************************/
#ifndef VXGE_TRAFFIC_H
#define VXGE_TRAFFIC_H
#include "vxge-reg.h"
#include "vxge-version.h"
#define VXGE_HW_DTR_MAX_T_CODE 16
#define VXGE_HW_ALL_FOXES 0xFFFFFFFFFFFFFFFFULL
#define VXGE_HW_INTR_MASK_ALL 0xFFFFFFFFFFFFFFFFULL
#define VXGE_HW_MAX_VIRTUAL_PATHS 17
#define VXGE_HW_MAC_MAX_MAC_PORT_ID 2
#define VXGE_HW_DEFAULT_32 0xffffffff
/* frame sizes */
#define VXGE_HW_HEADER_802_2_SIZE 3
#define VXGE_HW_HEADER_SNAP_SIZE 5
#define VXGE_HW_HEADER_VLAN_SIZE 4
#define VXGE_HW_MAC_HEADER_MAX_SIZE \
(ETH_HLEN + \
VXGE_HW_HEADER_802_2_SIZE + \
VXGE_HW_HEADER_VLAN_SIZE + \
VXGE_HW_HEADER_SNAP_SIZE)
/* 32bit alignments */
#define VXGE_HW_HEADER_ETHERNET_II_802_3_ALIGN 2
#define VXGE_HW_HEADER_802_2_SNAP_ALIGN 2
#define VXGE_HW_HEADER_802_2_ALIGN 3
#define VXGE_HW_HEADER_SNAP_ALIGN 1
#define VXGE_HW_L3_CKSUM_OK 0xFFFF
#define VXGE_HW_L4_CKSUM_OK 0xFFFF
/* Forward declarations */
struct __vxge_hw_device;
struct __vxge_hw_vpath_handle;
struct vxge_hw_vp_config;
struct __vxge_hw_virtualpath;
struct __vxge_hw_channel;
struct __vxge_hw_fifo;
struct __vxge_hw_ring;
struct vxge_hw_ring_attr;
struct vxge_hw_mempool;
#ifndef TRUE
#define TRUE 1
#endif
#ifndef FALSE
#define FALSE 0
#endif
/*VXGE_HW_STATUS_H*/
#define VXGE_HW_EVENT_BASE 0
#define VXGE_LL_EVENT_BASE 100
/**
* enum vxge_hw_event - Enumerates slow-path HW events.
* @VXGE_HW_EVENT_UNKNOWN: Unknown (and invalid) event.
* @VXGE_HW_EVENT_SERR: Serious vpath hardware error event.
* @VXGE_HW_EVENT_ECCERR: vpath ECC error event.
* @VXGE_HW_EVENT_VPATH_ERR: Error local to the respective vpath
* @VXGE_HW_EVENT_FIFO_ERR: FIFO Doorbell fifo error.
* @VXGE_HW_EVENT_SRPCIM_SERR: srpcim hardware error event.
* @VXGE_HW_EVENT_MRPCIM_SERR: mrpcim hardware error event.
* @VXGE_HW_EVENT_MRPCIM_ECCERR: mrpcim ecc error event.
* @VXGE_HW_EVENT_RESET_START: Privileged entity is starting device reset
* @VXGE_HW_EVENT_RESET_COMPLETE: Device reset has been completed
* @VXGE_HW_EVENT_SLOT_FREEZE: Slot-freeze event. Driver tries to distinguish
* slot-freeze from other critical events (e.g. ECC) when it is
* impossible to PIO read "through" the bus, i.e. when getting all-foxes.
*
* enum vxge_hw_event enumerates slow-path HW events.
*
* See also: struct vxge_hw_uld_cbs{}, vxge_uld_link_up_f{},
* vxge_uld_link_down_f{}.
*/
enum vxge_hw_event {
VXGE_HW_EVENT_UNKNOWN = 0,
/* HW events */
VXGE_HW_EVENT_RESET_START = VXGE_HW_EVENT_BASE + 1,
VXGE_HW_EVENT_RESET_COMPLETE = VXGE_HW_EVENT_BASE + 2,
VXGE_HW_EVENT_LINK_DOWN = VXGE_HW_EVENT_BASE + 3,
VXGE_HW_EVENT_LINK_UP = VXGE_HW_EVENT_BASE + 4,
VXGE_HW_EVENT_ALARM_CLEARED = VXGE_HW_EVENT_BASE + 5,
VXGE_HW_EVENT_ECCERR = VXGE_HW_EVENT_BASE + 6,
VXGE_HW_EVENT_MRPCIM_ECCERR = VXGE_HW_EVENT_BASE + 7,
VXGE_HW_EVENT_FIFO_ERR = VXGE_HW_EVENT_BASE + 8,
VXGE_HW_EVENT_VPATH_ERR = VXGE_HW_EVENT_BASE + 9,
VXGE_HW_EVENT_CRITICAL_ERR = VXGE_HW_EVENT_BASE + 10,
VXGE_HW_EVENT_SERR = VXGE_HW_EVENT_BASE + 11,
VXGE_HW_EVENT_SRPCIM_SERR = VXGE_HW_EVENT_BASE + 12,
VXGE_HW_EVENT_MRPCIM_SERR = VXGE_HW_EVENT_BASE + 13,
VXGE_HW_EVENT_SLOT_FREEZE = VXGE_HW_EVENT_BASE + 14,
};
#define VXGE_HW_SET_LEVEL(a, b) (((a) > (b)) ? (a) : (b))
/*
* struct vxge_hw_mempool_dma - Represents DMA objects passed to the
caller.
*/
struct vxge_hw_mempool_dma {
dma_addr_t addr;
struct pci_dev *handle;
struct pci_dev *acc_handle;
};
/*
* vxge_hw_mempool_item_f - Mempool item alloc/free callback
* @mempoolh: Memory pool handle.
* @memblock: Address of memory block
* @memblock_index: Index of memory block
* @item: Item that gets allocated or freed.
* @index: Item's index in the memory pool.
* @is_last: True, if this item is the last one in the pool; false - otherwise.
* @userdata: Per-pool user context.
*
* Memory pool allocation/deallocation callback.
*/
/*
* struct vxge_hw_mempool - Memory pool.
*/
struct vxge_hw_mempool {
void (*item_func_alloc)(
struct vxge_hw_mempool *mempoolh,
u32 memblock_index,
struct vxge_hw_mempool_dma *dma_object,
u32 index,
u32 is_last);
void *userdata;
void **memblocks_arr;
void **memblocks_priv_arr;
struct vxge_hw_mempool_dma *memblocks_dma_arr;
struct __vxge_hw_device *devh;
u32 memblock_size;
u32 memblocks_max;
u32 memblocks_allocated;
u32 item_size;
u32 items_max;
u32 items_initial;
u32 items_current;
u32 items_per_memblock;
void **items_arr;
u32 items_priv_size;
};
#define VXGE_HW_MAX_INTR_PER_VP 4
#define VXGE_HW_VPATH_INTR_TX 0
#define VXGE_HW_VPATH_INTR_RX 1
#define VXGE_HW_VPATH_INTR_EINTA 2
#define VXGE_HW_VPATH_INTR_BMAP 3
#define VXGE_HW_BLOCK_SIZE 4096
/**
* struct vxge_hw_tim_intr_config - Titan Tim interrupt configuration.
* @intr_enable: Set to 1, if interrupt is enabled.
* @btimer_val: Boundary Timer Initialization value in units of 272 ns.
* @timer_ac_en: Timer Automatic Cancel. 1 : Automatic Canceling Enable: when
* asserted, other interrupt-generating entities will cancel the
* scheduled timer interrupt.
* @timer_ci_en: Timer Continuous Interrupt. 1 : Continuous Interrupting Enable:
* When asserted, an interrupt will be generated every time the
* boundary timer expires, even if no traffic has been transmitted
* on this interrupt.
* @timer_ri_en: Timer Consecutive (Re-) Interrupt 1 : Consecutive
* (Re-) Interrupt Enable: When asserted, an interrupt will be
* generated the next time the timer expires, even if no traffic has
* been transmitted on this interrupt. (This will only happen once
* each time that this value is written to the TIM.) This bit is
* cleared by H/W at the end of the current-timer-interval when
* the interrupt is triggered.
* @rtimer_val: Restriction Timer Initialization value in units of 272 ns.
* @util_sel: Utilization Selector. Selects which of the workload approximations
* to use (e.g. legacy Tx utilization, Tx/Rx utilization, host
* specified utilization etc.), selects one of
* the 17 host configured values.
* 0-Virtual Path 0
* 1-Virtual Path 1
* ...
* 16-Virtual Path 16
* 17-Legacy Tx network utilization, provided by TPA
* 18-Legacy Rx network utilization, provided by FAU
* 19-Average of legacy Rx and Tx utilization calculated from link
* utilization values.
* 20-31-Invalid configurations
* 32-Host utilization for Virtual Path 0
* 33-Host utilization for Virtual Path 1
* ...
* 48-Host utilization for Virtual Path 16
* 49-Legacy Tx network utilization, provided by TPA
* 50-Legacy Rx network utilization, provided by FAU
* 51-Average of legacy Rx and Tx utilization calculated from
* link utilization values.
* 52-63-Invalid configurations
* @ltimer_val: Latency Timer Initialization Value in units of 272 ns.
* @txd_cnt_en: TxD Return Event Count Enable. This configuration bit when set
* to 1 enables counting of TxD0 returns (signalled by PCC's),
* towards utilization event count values.
* @urange_a: Defines the upper limit (in percent) for this utilization range
* to be active. This range is considered active
* if 0 <= UTIL <= URNG_A
* and the UEC_A field (below) is non-zero.
* @uec_a: Utilization Event Count A. If this range is active, the adapter will
* wait until UEC_A events have occurred on the interrupt before
* generating an interrupt.
* @urange_b: Link utilization range B.
* @uec_b: Utilization Event Count B.
* @urange_c: Link utilization range C.
* @uec_c: Utilization Event Count C.
* @urange_d: Link utilization range D.
* @uec_d: Utilization Event Count D.
* Traffic Interrupt Controller Module interrupt configuration.
*/
struct vxge_hw_tim_intr_config {
u32 intr_enable;
#define VXGE_HW_TIM_INTR_ENABLE 1
#define VXGE_HW_TIM_INTR_DISABLE 0
#define VXGE_HW_TIM_INTR_DEFAULT 0
u32 btimer_val;
#define VXGE_HW_MIN_TIM_BTIMER_VAL 0
#define VXGE_HW_MAX_TIM_BTIMER_VAL 67108864
#define VXGE_HW_USE_FLASH_DEFAULT (~0)
u32 timer_ac_en;
#define VXGE_HW_TIM_TIMER_AC_ENABLE 1
#define VXGE_HW_TIM_TIMER_AC_DISABLE 0
u32 timer_ci_en;
#define VXGE_HW_TIM_TIMER_CI_ENABLE 1
#define VXGE_HW_TIM_TIMER_CI_DISABLE 0
u32 timer_ri_en;
#define VXGE_HW_TIM_TIMER_RI_ENABLE 1
#define VXGE_HW_TIM_TIMER_RI_DISABLE 0
u32 rtimer_val;
#define VXGE_HW_MIN_TIM_RTIMER_VAL 0
#define VXGE_HW_MAX_TIM_RTIMER_VAL 67108864
u32 util_sel;
#define VXGE_HW_TIM_UTIL_SEL_LEGACY_TX_NET_UTIL 17
#define VXGE_HW_TIM_UTIL_SEL_LEGACY_RX_NET_UTIL 18
#define VXGE_HW_TIM_UTIL_SEL_LEGACY_TX_RX_AVE_NET_UTIL 19
#define VXGE_HW_TIM_UTIL_SEL_PER_VPATH 63
u32 ltimer_val;
#define VXGE_HW_MIN_TIM_LTIMER_VAL 0
#define VXGE_HW_MAX_TIM_LTIMER_VAL 67108864
/* Line utilization interrupts */
u32 urange_a;
#define VXGE_HW_MIN_TIM_URANGE_A 0
#define VXGE_HW_MAX_TIM_URANGE_A 100
u32 uec_a;
#define VXGE_HW_MIN_TIM_UEC_A 0
#define VXGE_HW_MAX_TIM_UEC_A 65535
u32 urange_b;
#define VXGE_HW_MIN_TIM_URANGE_B 0
#define VXGE_HW_MAX_TIM_URANGE_B 100
u32 uec_b;
#define VXGE_HW_MIN_TIM_UEC_B 0
#define VXGE_HW_MAX_TIM_UEC_B 65535
u32 urange_c;
#define VXGE_HW_MIN_TIM_URANGE_C 0
#define VXGE_HW_MAX_TIM_URANGE_C 100
u32 uec_c;
#define VXGE_HW_MIN_TIM_UEC_C 0
#define VXGE_HW_MAX_TIM_UEC_C 65535
u32 uec_d;
#define VXGE_HW_MIN_TIM_UEC_D 0
#define VXGE_HW_MAX_TIM_UEC_D 65535
};
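/*
 * Example (illustrative sketch only): a hypothetical initializer for a
 * transmit-side TIM configuration using the flags and bounds defined
 * above.  The numeric values are placeholders, not recommended settings.
 */
static const struct vxge_hw_tim_intr_config vxge_example_tim_tx_cfg = {
	.intr_enable	= VXGE_HW_TIM_INTR_ENABLE,
	.btimer_val	= 250,		/* units of 272 ns */
	.timer_ac_en	= VXGE_HW_TIM_TIMER_AC_ENABLE,
	.timer_ci_en	= VXGE_HW_TIM_TIMER_CI_DISABLE,
	.timer_ri_en	= VXGE_HW_TIM_TIMER_RI_DISABLE,
	.rtimer_val	= 0,
	.util_sel	= VXGE_HW_TIM_UTIL_SEL_LEGACY_TX_NET_UTIL,
	.ltimer_val	= 1000,		/* units of 272 ns */
	.urange_a	= 5,
	.uec_a		= 1,
	.urange_b	= 15,
	.uec_b		= 4,
	.urange_c	= 40,
	.uec_c		= 8,
	.uec_d		= 32,
};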
#define VXGE_HW_STATS_OP_READ 0
#define VXGE_HW_STATS_OP_CLEAR_STAT 1
#define VXGE_HW_STATS_OP_CLEAR_ALL_VPATH_STATS 2
#define VXGE_HW_STATS_OP_CLEAR_ALL_STATS_OF_LOC 2
#define VXGE_HW_STATS_OP_CLEAR_ALL_STATS 3
#define VXGE_HW_STATS_LOC_AGGR 17
#define VXGE_HW_STATS_AGGRn_OFFSET 0x00720
#define VXGE_HW_STATS_VPATH_TX_OFFSET 0x0
#define VXGE_HW_STATS_VPATH_RX_OFFSET 0x00090
#define VXGE_HW_STATS_VPATH_PROG_EVENT_VNUM0_OFFSET (0x001d0 >> 3)
#define VXGE_HW_STATS_GET_VPATH_PROG_EVENT_VNUM0(bits) \
vxge_bVALn(bits, 0, 32)
#define VXGE_HW_STATS_GET_VPATH_PROG_EVENT_VNUM1(bits) \
vxge_bVALn(bits, 32, 32)
#define VXGE_HW_STATS_VPATH_PROG_EVENT_VNUM2_OFFSET (0x001d8 >> 3)
#define VXGE_HW_STATS_GET_VPATH_PROG_EVENT_VNUM2(bits) \
vxge_bVALn(bits, 0, 32)
#define VXGE_HW_STATS_GET_VPATH_PROG_EVENT_VNUM3(bits) \
vxge_bVALn(bits, 32, 32)
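/*
 * Example (illustrative sketch only): splitting one 64-bit programmable
 * event statistics word into its two 32-bit counters with the GET macros
 * above.  "stat_qword" is a hypothetical value read from the vpath
 * statistics block at VXGE_HW_STATS_VPATH_PROG_EVENT_VNUM0_OFFSET.
 */
static inline void vxge_example_prog_event(u64 stat_qword,
					   u32 *vnum0, u32 *vnum1)
{
	*vnum0 = (u32)VXGE_HW_STATS_GET_VPATH_PROG_EVENT_VNUM0(stat_qword);
	*vnum1 = (u32)VXGE_HW_STATS_GET_VPATH_PROG_EVENT_VNUM1(stat_qword);
}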
/**
* struct vxge_hw_xmac_aggr_stats - Per-Aggregator XMAC Statistics
*
* @tx_frms: Count of data frames transmitted on this Aggregator on all
* its Aggregation ports. Does not include LACPDUs or Marker PDUs.
* However, does include frames discarded by the Distribution
* function.
* @tx_data_octets: Count of data and padding octets of frames transmitted
* on this Aggregator on all its Aggregation ports. Does not include
* octets of LACPDUs or Marker PDUs. However, does include octets of
* frames discarded by the Distribution function.
* @tx_mcast_frms: Count of data frames transmitted (to a group destination
* address other than the broadcast address) on this Aggregator on
* all its Aggregation ports. Does not include LACPDUs or Marker
* PDUs. However, does include frames discarded by the Distribution
* function.
* @tx_bcast_frms: Count of broadcast data frames transmitted on this Aggregator
* on all its Aggregation ports. Does not include LACPDUs or Marker
* PDUs. However, does include frames discarded by the Distribution
* function.
* @tx_discarded_frms: Count of data frames to be transmitted on this Aggregator
* that are discarded by the Distribution function. This occurs when
* conversations are allocated to different ports and have to be
* flushed on old ports.
* @tx_errored_frms: Count of data frames transmitted on this Aggregator that
* experience transmission errors on its Aggregation ports.
* @rx_frms: Count of data frames received on this Aggregator on all its
* Aggregation ports. Does not include LACPDUs or Marker PDUs.
* Also, does not include frames discarded by the Collection
* function.
* @rx_data_octets: Count of data and padding octets of frames received on this
* Aggregator on all its Aggregation ports. Does not include octets
* of LACPDUs or Marker PDUs. Also, does not include
* octets of frames
* discarded by the Collection function.
* @rx_mcast_frms: Count of data frames received (from a group destination
* address other than the broadcast address) on this Aggregator on
* all its Aggregation ports. Does not include LACPDUs or Marker
* PDUs. Also, does not include frames discarded by the Collection
* function.
* @rx_bcast_frms: Count of broadcast data frames received on this Aggregator on
* all its Aggregation ports. Does not include LACPDUs or Marker
* PDUs. Also, does not include frames discarded by the Collection
* function.
* @rx_discarded_frms: Count of data frames received on this Aggregator that are
* discarded by the Collection function because the Collection
* function was disabled on the port on which the frames are received.
* @rx_errored_frms: Count of data frames received on this Aggregator that are
* discarded by its Aggregation ports, or are discarded by the
* Collection function of the Aggregator, or that are discarded by
* the Aggregator due to detection of an illegal Slow Protocols PDU.
* @rx_unknown_slow_proto_frms: Count of data frames received on this Aggregator
* that are discarded by its Aggregation ports due to detection of
* an unknown Slow Protocols PDU.
*
* Per-Aggregator XMAC statistics.
*/
struct vxge_hw_xmac_aggr_stats {
/*0x000*/ u64 tx_frms;
/*0x008*/ u64 tx_data_octets;
/*0x010*/ u64 tx_mcast_frms;
/*0x018*/ u64 tx_bcast_frms;
/*0x020*/ u64 tx_discarded_frms;
/*0x028*/ u64 tx_errored_frms;
/*0x030*/ u64 rx_frms;
/*0x038*/ u64 rx_data_octets;
/*0x040*/ u64 rx_mcast_frms;
/*0x048*/ u64 rx_bcast_frms;
/*0x050*/ u64 rx_discarded_frms;
/*0x058*/ u64 rx_errored_frms;
/*0x060*/ u64 rx_unknown_slow_proto_frms;
} __packed;
/**
* struct vxge_hw_xmac_port_stats - XMAC Port Statistics
*
* @tx_ttl_frms: Count of successfully transmitted MAC frames
* @tx_ttl_octets: Count of total octets of transmitted frames, not including
* framing characters (i.e. less framing bits). To determine the
* total octets of transmitted frames, including framing characters,
* multiply PORTn_TX_TTL_FRMS by 8 and add it to this stat (unless
* otherwise configured, this stat only counts frames that have
* 8 bytes of preamble for each frame). This stat can be configured
* (see XMAC_STATS_GLOBAL_CFG.TTL_FRMS_HANDLING) to count everything
* including the preamble octets.
* @tx_data_octets: Count of data and padding octets of successfully transmitted
* frames.
* @tx_mcast_frms: Count of successfully transmitted frames to a group address
* other than the broadcast address.
* @tx_bcast_frms: Count of successfully transmitted frames to the broadcast
* group address.
* @tx_ucast_frms: Count of transmitted frames containing a unicast address.
* Includes discarded frames that are not sent to the network.
* @tx_tagged_frms: Count of transmitted frames containing a VLAN tag.
* @tx_vld_ip: Count of transmitted IP datagrams that are passed to the network.
* @tx_vld_ip_octets: Count of total octets of transmitted IP datagrams that
* are passed to the network.
* @tx_icmp: Count of transmitted ICMP messages. Includes messages not sent
* due to problems within ICMP.
* @tx_tcp: Count of transmitted TCP segments. Does not include segments
* containing retransmitted octets.
* @tx_rst_tcp: Count of transmitted TCP segments containing the RST flag.
* @tx_udp: Count of transmitted UDP datagrams.
* @tx_parse_error: Increments when the TPA is unable to parse a packet. This
* generally occurs when a packet is corrupt somehow, including
* packets that have IP version mismatches, invalid Layer 2 control
* fields, etc. L3/L4 checksums are not offloaded, but the packet
* is still transmitted.
* @tx_unknown_protocol: Increments when the TPA encounters an unknown
* protocol, such as a new IPv6 extension header, or an unsupported
* Routing Type. The packet still has a checksum calculated but it
* may be incorrect.
* @tx_pause_ctrl_frms: Count of MAC PAUSE control frames that are transmitted.
* Since the only control frames supported by this device are
* PAUSE frames, this register is a count of all transmitted MAC
* control frames.
* @tx_marker_pdu_frms: Count of Marker PDUs transmitted
* on this Aggregation port.
* @tx_lacpdu_frms: Count of LACPDUs transmitted on this Aggregation port.
* @tx_drop_ip: Count of transmitted IP datagrams that could not be passed to
* the network. Increments because of:
* 1) An internal processing error
* (such as an uncorrectable ECC error). 2) A frame parsing error
* during IP checksum calculation.
* @tx_marker_resp_pdu_frms: Count of Marker Response PDUs transmitted on this
* Aggregation port.
* @tx_xgmii_char2_match: Maintains a count of the number of transmitted XGMII
* characters that match a pattern that is programmable through
* register XMAC_STATS_TX_XGMII_CHAR_PORTn. By default, the pattern
* is set to /T/ (i.e. the terminate character), thus the statistic
* tracks the number of transmitted Terminate characters.
* @tx_xgmii_char1_match: Maintains a count of the number of transmitted XGMII
* characters that match a pattern that is programmable through
* register XMAC_STATS_TX_XGMII_CHAR_PORTn. By default, the pattern
* is set to /S/ (i.e. the start character),
* thus the statistic tracks
* the number of transmitted Start characters.
* @tx_xgmii_column2_match: Maintains a count of the number of transmitted XGMII
* columns that match a pattern that is programmable through register
* XMAC_STATS_TX_XGMII_COLUMN2_PORTn. By default, the pattern is set
* to 4 x /E/ (i.e. a column containing all error characters), thus
* the statistic tracks the number of Error columns transmitted at
* any time. If XMAC_STATS_TX_XGMII_BEHAV_COLUMN2_PORTn.NEAR_COL1 is
* set to 1, then this stat increments when COLUMN2 is found within
* 'n' clocks after COLUMN1. Here, 'n' is defined by
* XMAC_STATS_TX_XGMII_BEHAV_COLUMN2_PORTn.NUM_COL (if 'n' is set
* to 0, then it means to search anywhere for COLUMN2).
* @tx_xgmii_column1_match: Maintains a count of the number of transmitted XGMII
* columns that match a pattern that is programmable through register
* XMAC_STATS_TX_XGMII_COLUMN1_PORTn. By default, the pattern is set
* to 4 x /I/ (i.e. a column containing all idle characters),
* thus the statistic tracks the number of transmitted Idle columns.
* @tx_any_err_frms: Count of transmitted frames containing any error that
* prevents them from being passed to the network. Increments if
* there is an ECC error while reading the frame out of the transmit
* buffer. Also increments if the transmit protocol assist (TPA)
* block determines that the frame should not be sent.
* @tx_drop_frms: Count of frames that could not be sent for no other reason
* than internal MAC processing. Increments once whenever the
* transmit buffer is flushed (due to an ECC error on a memory
* descriptor).
* @rx_ttl_frms: Count of total received MAC frames, including frames received
* with frame-too-long, FCS, or length errors. This stat can be
* configured (see XMAC_STATS_GLOBAL_CFG.TTL_FRMS_HANDLING) to count
* everything, even "frames" as small one byte of preamble.
* @rx_vld_frms: Count of successfully received MAC frames. Does not include
* frames received with frame-too-long, FCS, or length errors.
* @rx_offload_frms: Count of offloaded received frames that are passed to
* the host.
* @rx_ttl_octets: Count of total octets of received frames, not including
* framing characters (i.e. less framing bits). To determine the
* total octets of received frames, including framing characters,
* multiply PORTn_RX_TTL_FRMS by 8 and add it to this stat (unless
* otherwise configured, this stat only counts frames that have 8
* bytes of preamble for each frame). This stat can be configured
* (see XMAC_STATS_GLOBAL_CFG.TTL_FRMS_HANDLING) to count everything,
* even the preamble octets of "frames" as small one byte of preamble
* @rx_data_octets: Count of data and padding octets of successfully received
* frames. Does not include frames received with frame-too-long,
* FCS, or length errors.
* @rx_offload_octets: Count of total octets, not including framing
* characters, of offloaded received frames that are passed
* to the host.
* @rx_vld_mcast_frms: Count of successfully received MAC frames containing a
* nonbroadcast group address. Does not include frames received
* with frame-too-long, FCS, or length errors.
* @rx_vld_bcast_frms: Count of successfully received MAC frames containing
* the broadcast group address. Does not include frames received
* with frame-too-long, FCS, or length errors.
* @rx_accepted_ucast_frms: Count of successfully received frames containing
* a unicast address. Only includes frames that are passed to
* the system.
* @rx_accepted_nucast_frms: Count of successfully received frames containing
* a non-unicast (broadcast or multicast) address. Only includes
* frames that are passed to the system. Could include, for instance,
* non-unicast frames that contain FCS errors if the MAC_ERROR_CFG
* register is set to pass FCS-errored frames to the host.
* @rx_tagged_frms: Count of received frames containing a VLAN tag.
* @rx_long_frms: Count of received frames that are longer than RX_MAX_PYLD_LEN
* + 18 bytes (+ 22 bytes if VLAN-tagged).
* @rx_usized_frms: Count of received frames of length (including FCS, but not
* framing bits) less than 64 octets, that are otherwise well-formed.
* In other words, counts runts.
* @rx_osized_frms: Count of received frames of length (including FCS, but not
* framing bits) more than 1518 octets, that are otherwise
* well-formed. Note: If register XMAC_STATS_GLOBAL_CFG.VLAN_HANDLING
* is set to 1, then "more than 1518 octets" becomes "more than 1518
* (1522 if VLAN-tagged) octets".
* @rx_frag_frms: Count of received frames of length (including FCS, but not
* framing bits) less than 64 octets that had bad FCS. In other
* words, counts fragments.
* @rx_jabber_frms: Count of received frames of length (including FCS, but not
* framing bits) more than 1518 octets that had bad FCS. In other
* words, counts jabbers. Note: If register
* XMAC_STATS_GLOBAL_CFG.VLAN_HANDLING is set to 1, then "more than
* 1518 octets" becomes "more than 1518 (1522 if VLAN-tagged)
* octets".
* @rx_ttl_64_frms: Count of total received MAC frames with length (including
* FCS, but not framing bits) of exactly 64 octets. Includes frames
* received with frame-too-long, FCS, or length errors.
* @rx_ttl_65_127_frms: Count of total received MAC frames with length
* (including FCS, but not framing bits) of between 65 and 127
* octets inclusive. Includes frames received with frame-too-long,
* FCS, or length errors.
* @rx_ttl_128_255_frms: Count of total received MAC frames with length
* (including FCS, but not framing bits) of between 128 and 255
* octets inclusive. Includes frames received with frame-too-long,
* FCS, or length errors.
* @rx_ttl_256_511_frms: Count of total received MAC frames with length
* (including FCS, but not framing bits) of between 256 and 511
* octets inclusive. Includes frames received with frame-too-long,
* FCS, or length errors.
* @rx_ttl_512_1023_frms: Count of total received MAC frames with length
* (including FCS, but not framing bits) of between 512 and 1023
* octets inclusive. Includes frames received with frame-too-long,
* FCS, or length errors.
* @rx_ttl_1024_1518_frms: Count of total received MAC frames with length
* (including FCS, but not framing bits) of between 1024 and 1518
* octets inclusive. Includes frames received with frame-too-long,
* FCS, or length errors.
* @rx_ttl_1519_4095_frms: Count of total received MAC frames with length
* (including FCS, but not framing bits) of between 1519 and 4095
* octets inclusive. Includes frames received with frame-too-long,
* FCS, or length errors.
* @rx_ttl_4096_8191_frms: Count of total received MAC frames with length
* (including FCS, but not framing bits) of between 4096 and 8191
* octets inclusive. Includes frames received with frame-too-long,
* FCS, or length errors.
* @rx_ttl_8192_max_frms: Count of total received MAC frames with length
* (including FCS, but not framing bits) of between 8192 and
* RX_MAX_PYLD_LEN+18 octets inclusive. Includes frames received
* with frame-too-long, FCS, or length errors.
* @rx_ttl_gt_max_frms: Count of total received MAC frames with length
* (including FCS, but not framing bits) exceeding
* RX_MAX_PYLD_LEN+18 (+22 bytes if VLAN-tagged) octets inclusive.
* Includes frames received with frame-too-long,
* FCS, or length errors.
* @rx_ip: Count of received IP datagrams. Includes errored IP datagrams.
* @rx_accepted_ip: Count of received IP datagrams that
* are passed to the system.
* @rx_ip_octets: Count of number of octets in received IP datagrams. Includes
* errored IP datagrams.
* @rx_err_ip: Count of received IP datagrams containing errors. For example,
* bad IP checksum.
* @rx_icmp: Count of received ICMP messages. Includes errored ICMP messages.
* @rx_tcp: Count of received TCP segments. Includes errored TCP segments.
* Note: This stat contains a count of all received TCP segments,
* regardless of whether or not they pertain to an established
* connection.
* @rx_udp: Count of received UDP datagrams.
* @rx_err_tcp: Count of received TCP segments containing errors. For example,
* bad TCP checksum.
* @rx_pause_count: Count of number of pause quanta that the MAC has been in
* the paused state. Recall, one pause quantum equates to 512
* bit times.
* @rx_pause_ctrl_frms: Count of received MAC PAUSE control frames.
* @rx_unsup_ctrl_frms: Count of received MAC control frames that do not
* contain the PAUSE opcode. The sum of RX_PAUSE_CTRL_FRMS and
* this register is a count of all received MAC control frames.
* Note: This stat may be configured to count all layer 2 errors
* (i.e. length errors and FCS errors).
* @rx_fcs_err_frms: Count of received MAC frames that do not pass FCS. Does
* not include frames received with frame-too-long or
* frame-too-short error.
* @rx_in_rng_len_err_frms: Count of received frames with a length/type field
* value between 46 (42 for VLAN-tagged frames) and 1500 (also 1500
* for VLAN-tagged frames), inclusive, that does not match the
* number of data octets (including pad) received. Also contains
* a count of received frames with a length/type field less than
* 46 (42 for VLAN-tagged frames) and the number of data octets
* (including pad) received is greater than 46 (42 for VLAN-tagged
* frames).
* @rx_out_rng_len_err_frms: Count of received frames with length/type field
* between 1501 and 1535 decimal, inclusive.
* @rx_drop_frms: Count of received frames that could not be passed to the host.
* See PORTn_RX_L2_MGMT_DISCARD, PORTn_RX_RPA_DISCARD,
* PORTn_RX_TRASH_DISCARD, PORTn_RX_RTS_DISCARD, PORTn_RX_RED_DISCARD
* for a list of reasons. Because the RMAC drops one frame at a time,
* this stat also indicates the number of drop events.
* @rx_discarded_frms: Count of received frames containing
* any error that prevents
* them from being passed to the system. See PORTn_RX_FCS_DISCARD,
* PORTn_RX_LEN_DISCARD, and PORTn_RX_SWITCH_DISCARD for a list of
* reasons.
* @rx_drop_ip: Count of received IP datagrams that could not be passed to the
* host. See PORTn_RX_DROP_FRMS for a list of reasons.
* @rx_drop_udp: Count of received UDP datagrams that are not delivered to the
* host. See PORTn_RX_DROP_FRMS for a list of reasons.
* @rx_marker_pdu_frms: Count of valid Marker PDUs received on this Aggregation
* port.
* @rx_lacpdu_frms: Count of valid LACPDUs received on this Aggregation port.
* @rx_unknown_pdu_frms: Count of received frames (on this Aggregation port)
* that carry the Slow Protocols EtherType, but contain an unknown
* PDU. Or frames that contain the Slow Protocols group MAC address,
* but do not carry the Slow Protocols EtherType.
* @rx_marker_resp_pdu_frms: Count of valid Marker Response PDUs received on
* this Aggregation port.
* @rx_fcs_discard: Count of received frames that are discarded because the
* FCS check failed.
* @rx_illegal_pdu_frms: Count of received frames (on this Aggregation port)
* that carry the Slow Protocols EtherType, but contain a badly
* formed PDU. Or frames that carry the Slow Protocols EtherType,
* but contain an illegal value of Protocol Subtype.
* @rx_switch_discard: Count of received frames that are discarded by the
* internal switch because they did not have an entry in the
* Filtering Database. This includes frames that had an invalid
* destination MAC address or VLAN ID. It also includes frames that are
* discarded because they did not satisfy the length requirements
* of the target VPATH.
* @rx_len_discard: Count of received frames that are discarded because of an
* invalid frame length (includes fragments, oversized frames and
* mismatch between frame length and length/type field). This stat
* can be configured
* (see XMAC_STATS_GLOBAL_CFG.LEN_DISCARD_HANDLING).
* @rx_rpa_discard: Count of received frames that were discarded because the
* receive protocol assist (RPA) discovered an error in the frame
* or was unable to parse the frame.
* @rx_l2_mgmt_discard: Count of Layer 2 management frames (eg. pause frames,
* Link Aggregation Control Protocol (LACP) frames, etc.) that are
* discarded.
* @rx_rts_discard: Count of received frames that are discarded by the receive
* traffic steering (RTS) logic. Includes those frames discarded
* because the SSC response contradicted the switch table, because
* the SSC timed out, or because the target queue could not fit the
* frame.
* @rx_trash_discard: Count of received frames that are discarded because
* receive traffic steering (RTS) steered the frame to the trash
* queue.
* @rx_buff_full_discard: Count of received frames that are discarded because
* internal buffers are full. Includes frames discarded because the
* RTS logic is waiting for an SSC lookup that has no timeout bound.
* Also, includes frames that are dropped because the MAC2FAU buffer
* is nearly full -- this can happen if the external receive buffer
* is full and the receive path is backing up.
* @rx_red_discard: Count of received frames that are discarded because of RED
* (Random Early Discard).
* @rx_xgmii_ctrl_err_cnt: Maintains a count of unexpected or misplaced control
* characters occurring between times of normal data transmission
* (i.e. not included in RX_XGMII_DATA_ERR_CNT). This counter is
* incremented when either -
* 1) The Reconciliation Sublayer (RS) is expecting one control
* character and gets another (i.e. is expecting a Start
* character, but gets another control character).
* 2) Start control character is not in lane 0
* Only increments the count by one for each XGMII column.
* @rx_xgmii_data_err_cnt: Maintains a count of unexpected control characters
* during normal data transmission. If the Reconciliation Sublayer
* (RS) receives a control character, other than a terminate control
* character, during receipt of data octets then this register is
* incremented. Also increments if the start frame delimiter is not
* found in the correct location. Only increments the count by one
* for each XGMII column.
* @rx_xgmii_char1_match: Maintains a count of the number of XGMII characters
* that match a pattern that is programmable through register
* XMAC_STATS_RX_XGMII_CHAR_PORTn. By default, the pattern is set
* to /E/ (i.e. the error character), thus the statistic tracks the
* number of Error characters received at any time.
* @rx_xgmii_err_sym: Count of the number of symbol errors in the received
* XGMII data (i.e. PHY indicates "Receive Error" on the XGMII).
* Only includes symbol errors that are observed between the XGMII
* Start Frame Delimiter and End Frame Delimiter, inclusive. And
* only increments the count by one for each frame.
* @rx_xgmii_column1_match: Maintains a count of the number of XGMII columns
* that match a pattern that is programmable through register
* XMAC_STATS_RX_XGMII_COLUMN1_PORTn. By default, the pattern is set
* to 4 x /E/ (i.e. a column containing all error characters), thus
* the statistic tracks the number of Error columns received at any
* time.
* @rx_xgmii_char2_match: Maintains a count of the number of XGMII characters
* that match a pattern that is programmable through register
* XMAC_STATS_RX_XGMII_CHAR_PORTn. By default, the pattern is set
* to /E/ (i.e. the error character), thus the statistic tracks the
* number of Error characters received at any time.
* @rx_local_fault: Maintains a count of the number of times that link
* transitioned from "up" to "down" due to a local fault.
* @rx_xgmii_column2_match: Maintains a count of the number of XGMII columns
* that match a pattern that is programmable through register
* XMAC_STATS_RX_XGMII_COLUMN2_PORTn. By default, the pattern is set
* to 4 x /E/ (i.e. a column containing all error characters), thus
* the statistic tracks the number of Error columns received at any
* time. If XMAC_STATS_RX_XGMII_BEHAV_COLUMN2_PORTn.NEAR_COL1 is set
* to 1, then this stat increments when COLUMN2 is found within 'n'
* clocks after COLUMN1. Here, 'n' is defined by
* XMAC_STATS_RX_XGMII_BEHAV_COLUMN2_PORTn.NUM_COL (if 'n' is set to
* 0, then it means to search anywhere for COLUMN2).
* @rx_jettison: Count of received frames that are jettisoned because internal
* buffers are full.
* @rx_remote_fault: Maintains a count of the number of times that link
* transitioned from "up" to "down" due to a remote fault.
*
* XMAC Port Statistics.
*/
struct vxge_hw_xmac_port_stats {
/*0x000*/ u64 tx_ttl_frms;
/*0x008*/ u64 tx_ttl_octets;
/*0x010*/ u64 tx_data_octets;
/*0x018*/ u64 tx_mcast_frms;
/*0x020*/ u64 tx_bcast_frms;
/*0x028*/ u64 tx_ucast_frms;
/*0x030*/ u64 tx_tagged_frms;
/*0x038*/ u64 tx_vld_ip;
/*0x040*/ u64 tx_vld_ip_octets;
/*0x048*/ u64 tx_icmp;
/*0x050*/ u64 tx_tcp;
/*0x058*/ u64 tx_rst_tcp;
/*0x060*/ u64 tx_udp;
/*0x068*/ u32 tx_parse_error;
/*0x06c*/ u32 tx_unknown_protocol;
/*0x070*/ u64 tx_pause_ctrl_frms;
/*0x078*/ u32 tx_marker_pdu_frms;
/*0x07c*/ u32 tx_lacpdu_frms;
/*0x080*/ u32 tx_drop_ip;
/*0x084*/ u32 tx_marker_resp_pdu_frms;
/*0x088*/ u32 tx_xgmii_char2_match;
/*0x08c*/ u32 tx_xgmii_char1_match;
/*0x090*/ u32 tx_xgmii_column2_match;
/*0x094*/ u32 tx_xgmii_column1_match;
/*0x098*/ u32 unused1;
/*0x09c*/ u16 tx_any_err_frms;
/*0x09e*/ u16 tx_drop_frms;
/*0x0a0*/ u64 rx_ttl_frms;
/*0x0a8*/ u64 rx_vld_frms;
/*0x0b0*/ u64 rx_offload_frms;
/*0x0b8*/ u64 rx_ttl_octets;
/*0x0c0*/ u64 rx_data_octets;
/*0x0c8*/ u64 rx_offload_octets;
/*0x0d0*/ u64 rx_vld_mcast_frms;
/*0x0d8*/ u64 rx_vld_bcast_frms;
/*0x0e0*/ u64 rx_accepted_ucast_frms;
/*0x0e8*/ u64 rx_accepted_nucast_frms;
/*0x0f0*/ u64 rx_tagged_frms;
/*0x0f8*/ u64 rx_long_frms;
/*0x100*/ u64 rx_usized_frms;
/*0x108*/ u64 rx_osized_frms;
/*0x110*/ u64 rx_frag_frms;
/*0x118*/ u64 rx_jabber_frms;
/*0x120*/ u64 rx_ttl_64_frms;
/*0x128*/ u64 rx_ttl_65_127_frms;
/*0x130*/ u64 rx_ttl_128_255_frms;
/*0x138*/ u64 rx_ttl_256_511_frms;
/*0x140*/ u64 rx_ttl_512_1023_frms;
/*0x148*/ u64 rx_ttl_1024_1518_frms;
/*0x150*/ u64 rx_ttl_1519_4095_frms;
/*0x158*/ u64 rx_ttl_4096_8191_frms;
/*0x160*/ u64 rx_ttl_8192_max_frms;
/*0x168*/ u64 rx_ttl_gt_max_frms;
/*0x170*/ u64 rx_ip;
/*0x178*/ u64 rx_accepted_ip;
/*0x180*/ u64 rx_ip_octets;
/*0x188*/ u64 rx_err_ip;
/*0x190*/ u64 rx_icmp;
/*0x198*/ u64 rx_tcp;
/*0x1a0*/ u64 rx_udp;
/*0x1a8*/ u64 rx_err_tcp;
/*0x1b0*/ u64 rx_pause_count;
/*0x1b8*/ u64 rx_pause_ctrl_frms;
/*0x1c0*/ u64 rx_unsup_ctrl_frms;
/*0x1c8*/ u64 rx_fcs_err_frms;
/*0x1d0*/ u64 rx_in_rng_len_err_frms;
/*0x1d8*/ u64 rx_out_rng_len_err_frms;
/*0x1e0*/ u64 rx_drop_frms;
/*0x1e8*/ u64 rx_discarded_frms;
/*0x1f0*/ u64 rx_drop_ip;
/*0x1f8*/ u64 rx_drop_udp;
/*0x200*/ u32 rx_marker_pdu_frms;
/*0x204*/ u32 rx_lacpdu_frms;
/*0x208*/ u32 rx_unknown_pdu_frms;
/*0x20c*/ u32 rx_marker_resp_pdu_frms;
/*0x210*/ u32 rx_fcs_discard;
/*0x214*/ u32 rx_illegal_pdu_frms;
/*0x218*/ u32 rx_switch_discard;
/*0x21c*/ u32 rx_len_discard;
/*0x220*/ u32 rx_rpa_discard;
/*0x224*/ u32 rx_l2_mgmt_discard;
/*0x228*/ u32 rx_rts_discard;
/*0x22c*/ u32 rx_trash_discard;
/*0x230*/ u32 rx_buff_full_discard;
/*0x234*/ u32 rx_red_discard;
/*0x238*/ u32 rx_xgmii_ctrl_err_cnt;
/*0x23c*/ u32 rx_xgmii_data_err_cnt;
/*0x240*/ u32 rx_xgmii_char1_match;
/*0x244*/ u32 rx_xgmii_err_sym;
/*0x248*/ u32 rx_xgmii_column1_match;
/*0x24c*/ u32 rx_xgmii_char2_match;
/*0x250*/ u32 rx_local_fault;
/*0x254*/ u32 rx_xgmii_column2_match;
/*0x258*/ u32 rx_jettison;
/*0x25c*/ u32 rx_remote_fault;
} __packed;
/**
* struct vxge_hw_xmac_vpath_tx_stats - XMAC Vpath Tx Statistics
*
* @tx_ttl_eth_frms: Count of successfully transmitted MAC frames.
* @tx_ttl_eth_octets: Count of total octets of transmitted frames,
* not including framing characters (i.e. less framing bits).
* To determine the total octets of transmitted frames, including
* framing characters, multiply TX_TTL_ETH_FRMS by 8 and add it to
* this stat (the device always prepends 8 bytes of preamble for
* each frame)
* @tx_data_octets: Count of data and padding octets of successfully transmitted
* frames.
* @tx_mcast_frms: Count of successfully transmitted frames to a group address
* other than the broadcast address.
* @tx_bcast_frms: Count of successfully transmitted frames to the broadcast
* group address.
* @tx_ucast_frms: Count of transmitted frames containing a unicast address.
* Includes discarded frames that are not sent to the network.
* @tx_tagged_frms: Count of transmitted frames containing a VLAN tag.
* @tx_vld_ip: Count of transmitted IP datagrams that are passed to the network.
* @tx_vld_ip_octets: Count of total octets of transmitted IP datagrams that
* are passed to the network.
* @tx_icmp: Count of transmitted ICMP messages. Includes messages not sent due
* to problems within ICMP.
* @tx_tcp: Count of transmitted TCP segments. Does not include segments
* containing retransmitted octets.
* @tx_rst_tcp: Count of transmitted TCP segments containing the RST flag.
* @tx_udp: Count of transmitted UDP datagrams.
* @tx_unknown_protocol: Increments when the TPA encounters an unknown protocol,
* such as a new IPv6 extension header, or an unsupported Routing
* Type. The packet still has a checksum calculated but it may be
* incorrect.
* @tx_lost_ip: Count of transmitted IP datagrams that could not be passed
* to the network. Increments because of: 1) An internal processing
* error (such as an uncorrectable ECC error). 2) A frame parsing
* error during IP checksum calculation.
* @tx_parse_error: Increments when the TPA is unable to parse a packet. This
* generally occurs when a packet is corrupt somehow, including
* packets that have IP version mismatches, invalid Layer 2 control
* fields, etc. L3/L4 checksums are not offloaded, but the packet
* is still transmitted.
* @tx_tcp_offload: For frames belonging to offloaded sessions only, a count
* of transmitted TCP segments. Does not include segments containing
* retransmitted octets.
* @tx_retx_tcp_offload: For frames belonging to offloaded sessions only, the
* total number of segments retransmitted. Retransmitted segments
* that are sourced by the host are counted by the host.
* @tx_lost_ip_offload: For frames belonging to offloaded sessions only, a count
* of transmitted IP datagrams that could not be passed to the
* network.
*
* XMAC Vpath TX Statistics.
*/
struct vxge_hw_xmac_vpath_tx_stats {
u64 tx_ttl_eth_frms;
u64 tx_ttl_eth_octets;
u64 tx_data_octets;
u64 tx_mcast_frms;
u64 tx_bcast_frms;
u64 tx_ucast_frms;
u64 tx_tagged_frms;
u64 tx_vld_ip;
u64 tx_vld_ip_octets;
u64 tx_icmp;
u64 tx_tcp;
u64 tx_rst_tcp;
u64 tx_udp;
u32 tx_unknown_protocol;
u32 tx_lost_ip;
u32 unused1;
u32 tx_parse_error;
u64 tx_tcp_offload;
u64 tx_retx_tcp_offload;
u64 tx_lost_ip_offload;
} __packed;
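/*
 * Hypothetical helper, for illustration only (not part of the original
 * header): the tx_ttl_eth_octets kernel-doc above notes that total
 * transmitted octets including framing can be derived by adding the
 * 8 preamble bytes the device prepends to every frame. The helper name
 * is an assumption.
 */
static inline u64 vxge_hw_xmac_vpath_tx_octets_with_framing(
	const struct vxge_hw_xmac_vpath_tx_stats *stats)
{
	/* tx_ttl_eth_octets excludes framing; add 8 preamble bytes/frame */
	return stats->tx_ttl_eth_octets + 8ULL * stats->tx_ttl_eth_frms;
}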
/**
* struct vxge_hw_xmac_vpath_rx_stats - XMAC Vpath RX Statistics
*
* @rx_ttl_eth_frms: Count of successfully received MAC frames.
* @rx_vld_frms: Count of successfully received MAC frames. Does not include
* frames received with frame-too-long, FCS, or length errors.
* @rx_offload_frms: Count of offloaded received frames that are passed to
* the host.
* @rx_ttl_eth_octets: Count of total octets of received frames, not including
* framing characters (i.e. less framing bits). Only counts octets
* of frames that are at least 14 bytes (18 bytes for VLAN-tagged)
* before FCS. To determine the total octets of received frames,
* including framing characters, multiply RX_TTL_ETH_FRMS by 8 and
* add it to this stat (the stat RX_TTL_ETH_FRMS only counts frames
* that have the required 8 bytes of preamble).
* @rx_data_octets: Count of data and padding octets of successfully received
* frames. Does not include frames received with frame-too-long,
* FCS, or length errors.
* @rx_offload_octets: Count of total octets, not including framing characters,
* of offloaded received frames that are passed to the host.
* @rx_vld_mcast_frms: Count of successfully received MAC frames containing a
* nonbroadcast group address. Does not include frames received with
* frame-too-long, FCS, or length errors.
* @rx_vld_bcast_frms: Count of successfully received MAC frames containing the
* broadcast group address. Does not include frames received with
* frame-too-long, FCS, or length errors.
* @rx_accepted_ucast_frms: Count of successfully received frames containing
* a unicast address. Only includes frames that are passed to the
* system.
* @rx_accepted_nucast_frms: Count of successfully received frames containing
* a non-unicast (broadcast or multicast) address. Only includes
* frames that are passed to the system. Could include, for instance,
* non-unicast frames that contain FCS errors if the MAC_ERROR_CFG
* register is set to pass FCS-errored frames to the host.
* @rx_tagged_frms: Count of received frames containing a VLAN tag.
* @rx_long_frms: Count of received frames that are longer than RX_MAX_PYLD_LEN
* + 18 bytes (+ 22 bytes if VLAN-tagged).
* @rx_usized_frms: Count of received frames of length (including FCS, but not
* framing bits) less than 64 octets, that are otherwise well-formed.
* In other words, counts runts.
* @rx_osized_frms: Count of received frames of length (including FCS, but not
* framing bits) more than 1518 octets, that are otherwise
* well-formed.
* @rx_frag_frms: Count of received frames of length (including FCS, but not
* framing bits) less than 64 octets that had bad FCS.
* In other words, counts fragments.
* @rx_jabber_frms: Count of received frames of length (including FCS, but not
* framing bits) more than 1518 octets that had bad FCS. In other
* words, counts jabbers.
* @rx_ttl_64_frms: Count of total received MAC frames with length (including
* FCS, but not framing bits) of exactly 64 octets. Includes frames
* received with frame-too-long, FCS, or length errors.
* @rx_ttl_65_127_frms: Count of total received MAC frames
* with length (including
* FCS, but not framing bits) of between 65 and 127 octets inclusive.
* Includes frames received with frame-too-long, FCS,
* or length errors.
* @rx_ttl_128_255_frms: Count of total received MAC frames with length
* (including FCS, but not framing bits)
* of between 128 and 255 octets
* inclusive. Includes frames received with frame-too-long, FCS,
* or length errors.
* @rx_ttl_256_511_frms: Count of total received MAC frames with length
* (including FCS, but not framing bits)
* of between 256 and 511 octets
* inclusive. Includes frames received with frame-too-long, FCS, or
* length errors.
* @rx_ttl_512_1023_frms: Count of total received MAC frames with length
* (including FCS, but not framing bits) of between 512 and 1023
* octets inclusive. Includes frames received with frame-too-long,
* FCS, or length errors.
* @rx_ttl_1024_1518_frms: Count of total received MAC frames with length
* (including FCS, but not framing bits) of between 1024 and 1518
* octets inclusive. Includes frames received with frame-too-long,
* FCS, or length errors.
* @rx_ttl_1519_4095_frms: Count of total received MAC frames with length
* (including FCS, but not framing bits) of between 1519 and 4095
* octets inclusive. Includes frames received with frame-too-long,
* FCS, or length errors.
* @rx_ttl_4096_8191_frms: Count of total received MAC frames with length
* (including FCS, but not framing bits) of between 4096 and 8191
* octets inclusive. Includes frames received with frame-too-long,
* FCS, or length errors.
* @rx_ttl_8192_max_frms: Count of total received MAC frames with length
* (including FCS, but not framing bits) of between 8192 and
* RX_MAX_PYLD_LEN+18 octets inclusive. Includes frames received
* with frame-too-long, FCS, or length errors.
* @rx_ttl_gt_max_frms: Count of total received MAC frames with length
* (including FCS, but not framing bits) exceeding RX_MAX_PYLD_LEN+18
* (+22 bytes if VLAN-tagged) octets inclusive. Includes frames
* received with frame-too-long, FCS, or length errors.
* @rx_ip: Count of received IP datagrams. Includes errored IP datagrams.
* @rx_accepted_ip: Count of received IP datagrams that
* are passed to the system.
* @rx_ip_octets: Count of number of octets in received IP datagrams.
* Includes errored IP datagrams.
* @rx_err_ip: Count of received IP datagrams containing errors. For example,
* bad IP checksum.
* @rx_icmp: Count of received ICMP messages. Includes errored ICMP messages.
* @rx_tcp: Count of received TCP segments. Includes errored TCP segments.
* Note: This stat contains a count of all received TCP segments,
* regardless of whether or not they pertain to an established
* connection.
* @rx_udp: Count of received UDP datagrams.
* @rx_err_tcp: Count of received TCP segments containing errors. For example,
* bad TCP checksum.
* @rx_lost_frms: Count of received frames that could not be passed to the host.
* See RX_QUEUE_FULL_DISCARD and RX_RED_DISCARD
* for a list of reasons.
* @rx_lost_ip: Count of received IP datagrams that could not be passed to
* the host. See RX_LOST_FRMS for a list of reasons.
* @rx_lost_ip_offload: For frames belonging to offloaded sessions only, a count
* of received IP datagrams that could not be passed to the host.
* See RX_LOST_FRMS for a list of reasons.
* @rx_various_discard: Count of received frames that are discarded because
* the target receive queue is full.
* @rx_sleep_discard: Count of received frames that are discarded because the
* target VPATH is asleep (a Wake-on-LAN magic packet can be used
* to awaken the VPATH).
* @rx_red_discard: Count of received frames that are discarded because of RED
* (Random Early Discard).
* @rx_queue_full_discard: Count of received frames that are discarded because
* the target receive queue is full.
* @rx_mpa_ok_frms: Count of received frames that pass the MPA checks.
*
* XMAC Vpath RX Statistics.
*/
struct vxge_hw_xmac_vpath_rx_stats {
u64 rx_ttl_eth_frms;
u64 rx_vld_frms;
u64 rx_offload_frms;
u64 rx_ttl_eth_octets;
u64 rx_data_octets;
u64 rx_offload_octets;
u64 rx_vld_mcast_frms;
u64 rx_vld_bcast_frms;
u64 rx_accepted_ucast_frms;
u64 rx_accepted_nucast_frms;
u64 rx_tagged_frms;
u64 rx_long_frms;
u64 rx_usized_frms;
u64 rx_osized_frms;
u64 rx_frag_frms;
u64 rx_jabber_frms;
u64 rx_ttl_64_frms;
u64 rx_ttl_65_127_frms;
u64 rx_ttl_128_255_frms;
u64 rx_ttl_256_511_frms;
u64 rx_ttl_512_1023_frms;
u64 rx_ttl_1024_1518_frms;
u64 rx_ttl_1519_4095_frms;
u64 rx_ttl_4096_8191_frms;
u64 rx_ttl_8192_max_frms;
u64 rx_ttl_gt_max_frms;
u64 rx_ip;
u64 rx_accepted_ip;
u64 rx_ip_octets;
u64 rx_err_ip;
u64 rx_icmp;
u64 rx_tcp;
u64 rx_udp;
u64 rx_err_tcp;
u64 rx_lost_frms;
u64 rx_lost_ip;
u64 rx_lost_ip_offload;
u16 rx_various_discard;
u16 rx_sleep_discard;
u16 rx_red_discard;
u16 rx_queue_full_discard;
u64 rx_mpa_ok_frms;
} __packed;
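/*
 * Hypothetical helper, for illustration only (not part of the original
 * header): the rx_ttl_eth_octets kernel-doc above gives the recipe for
 * total received octets including framing characters (8 preamble bytes
 * per counted frame). The helper name is an assumption.
 */
static inline u64 vxge_hw_xmac_vpath_rx_octets_with_framing(
	const struct vxge_hw_xmac_vpath_rx_stats *stats)
{
	return stats->rx_ttl_eth_octets + 8ULL * stats->rx_ttl_eth_frms;
}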
/**
* struct vxge_hw_xmac_stats - XMAC Statistics
*
* @aggr_stats: Statistics on aggregate ports (port 0, port 1)
* @port_stats: Statistics on ports (wire 0, wire 1, LAG)
* @vpath_tx_stats: Per vpath XMAC TX stats
* @vpath_rx_stats: Per vpath XMAC RX stats
*
* XMAC Statistics.
*/
struct vxge_hw_xmac_stats {
struct vxge_hw_xmac_aggr_stats
aggr_stats[VXGE_HW_MAC_MAX_MAC_PORT_ID];
struct vxge_hw_xmac_port_stats
port_stats[VXGE_HW_MAC_MAX_MAC_PORT_ID+1];
struct vxge_hw_xmac_vpath_tx_stats
vpath_tx_stats[VXGE_HW_MAX_VIRTUAL_PATHS];
struct vxge_hw_xmac_vpath_rx_stats
vpath_rx_stats[VXGE_HW_MAX_VIRTUAL_PATHS];
};
/**
* struct vxge_hw_vpath_stats_hw_info - Titan vpath hardware statistics.
* @ini_num_mwr_sent: The number of PCI memory writes initiated by the PIC block
* for the given VPATH
* @ini_num_mrd_sent: The number of PCI memory reads initiated by the PIC block
* @ini_num_cpl_rcvd: The number of PCI read completions received by the
* PIC block
* @ini_num_mwr_byte_sent: The number of PCI memory write bytes sent by the PIC
* block to the host
* @ini_num_cpl_byte_rcvd: The number of PCI read completion bytes received by
* the PIC block
* @wrcrdtarb_xoff: TBD
* @rdcrdtarb_xoff: TBD
* @vpath_genstats_count0: TBD
* @vpath_genstats_count1: TBD
* @vpath_genstats_count2: TBD
* @vpath_genstats_count3: TBD
* @vpath_genstats_count4: TBD
* @vpath_genstats_count5: TBD
* @tx_stats: Transmit stats
* @rx_stats: Receive stats
* @prog_event_vnum1: Programmable statistic. Increments when internal logic
* detects a certain event. See register
* XMAC_STATS_CFG.EVENT_VNUM1_CFG for more information.
* @prog_event_vnum0: Programmable statistic. Increments when internal logic
* detects a certain event. See register
* XMAC_STATS_CFG.EVENT_VNUM0_CFG for more information.
* @prog_event_vnum3: Programmable statistic. Increments when internal logic
* detects a certain event. See register
* XMAC_STATS_CFG.EVENT_VNUM3_CFG for more information.
* @prog_event_vnum2: Programmable statistic. Increments when internal logic
* detects a certain event. See register
* XMAC_STATS_CFG.EVENT_VNUM2_CFG for more information.
* @rx_multi_cast_frame_discard: TBD
* @rx_frm_transferred: TBD
* @rxd_returned: TBD
* @rx_mpa_len_fail_frms: Count of received frames
* that fail the MPA length check
* @rx_mpa_mrk_fail_frms: Count of received frames
* that fail the MPA marker check
* @rx_mpa_crc_fail_frms: Count of received frames that fail the MPA CRC check
* @rx_permitted_frms: Count of frames that pass through the FAU and on to the
* frame buffer (and subsequently to the host).
* @rx_vp_reset_discarded_frms: Count of receive frames that are discarded
* because the VPATH is in reset
* @rx_wol_frms: Count of received "magic packet" frames. Stat increments
* whenever the received frame matches the VPATH's Wake-on-LAN
* signature(s) CRC.
* @tx_vp_reset_discarded_frms: Count of transmit frames that are discarded
* because the VPATH is in reset. Includes frames that are discarded
* because the current VPIN does not match that VPIN of the frame
*
* Titan vpath hardware statistics.
*/
struct vxge_hw_vpath_stats_hw_info {
/*0x000*/ u32 ini_num_mwr_sent;
/*0x004*/ u32 unused1;
/*0x008*/ u32 ini_num_mrd_sent;
/*0x00c*/ u32 unused2;
/*0x010*/ u32 ini_num_cpl_rcvd;
/*0x014*/ u32 unused3;
/*0x018*/ u64 ini_num_mwr_byte_sent;
/*0x020*/ u64 ini_num_cpl_byte_rcvd;
/*0x028*/ u32 wrcrdtarb_xoff;
/*0x02c*/ u32 unused4;
/*0x030*/ u32 rdcrdtarb_xoff;
/*0x034*/ u32 unused5;
/*0x038*/ u32 vpath_genstats_count0;
/*0x03c*/ u32 vpath_genstats_count1;
/*0x040*/ u32 vpath_genstats_count2;
/*0x044*/ u32 vpath_genstats_count3;
/*0x048*/ u32 vpath_genstats_count4;
/*0x04c*/ u32 unused6;
/*0x050*/ u32 vpath_genstats_count5;
/*0x054*/ u32 unused7;
/*0x058*/ struct vxge_hw_xmac_vpath_tx_stats tx_stats;
/*0x0e8*/ struct vxge_hw_xmac_vpath_rx_stats rx_stats;
/*0x220*/ u64 unused9;
/*0x228*/ u32 prog_event_vnum1;
/*0x22c*/ u32 prog_event_vnum0;
/*0x230*/ u32 prog_event_vnum3;
/*0x234*/ u32 prog_event_vnum2;
/*0x238*/ u16 rx_multi_cast_frame_discard;
/*0x23a*/ u8 unused10[6];
/*0x240*/ u32 rx_frm_transferred;
/*0x244*/ u32 unused11;
/*0x248*/ u16 rxd_returned;
/*0x24a*/ u8 unused12[6];
/*0x252*/ u16 rx_mpa_len_fail_frms;
/*0x254*/ u16 rx_mpa_mrk_fail_frms;
/*0x256*/ u16 rx_mpa_crc_fail_frms;
/*0x258*/ u16 rx_permitted_frms;
/*0x25c*/ u64 rx_vp_reset_discarded_frms;
/*0x25e*/ u64 rx_wol_frms;
/*0x260*/ u64 tx_vp_reset_discarded_frms;
} __packed;
/**
* struct vxge_hw_device_stats_mrpcim_info - Titan mrpcim hardware statistics.
* @pic.ini_rd_drop 0x0000 4 Number of DMA reads initiated
* by the adapter that were discarded because the VPATH is out of service
* @pic.ini_wr_drop 0x0004 4 Number of DMA writes initiated by the
* adapter that were discarded because the VPATH is out of service
* @pic.wrcrdtarb_ph_crdt_depleted[vplane0] 0x0008 4 Number of times
* the posted header credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_ph_crdt_depleted[vplane1] 0x0010 4 Number of times
* the posted header credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_ph_crdt_depleted[vplane2] 0x0018 4 Number of times
* the posted header credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_ph_crdt_depleted[vplane3] 0x0020 4 Number of times
* the posted header credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_ph_crdt_depleted[vplane4] 0x0028 4 Number of times
* the posted header credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_ph_crdt_depleted[vplane5] 0x0030 4 Number of times
* the posted header credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_ph_crdt_depleted[vplane6] 0x0038 4 Number of times
* the posted header credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_ph_crdt_depleted[vplane7] 0x0040 4 Number of times
* the posted header credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_ph_crdt_depleted[vplane8] 0x0048 4 Number of times
* the posted header credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_ph_crdt_depleted[vplane9] 0x0050 4 Number of times
* the posted header credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_ph_crdt_depleted[vplane10] 0x0058 4 Number of times
* the posted header credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_ph_crdt_depleted[vplane11] 0x0060 4 Number of times
* the posted header credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_ph_crdt_depleted[vplane12] 0x0068 4 Number of times
* the posted header credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_ph_crdt_depleted[vplane13] 0x0070 4 Number of times
* the posted header credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_ph_crdt_depleted[vplane14] 0x0078 4 Number of times
* the posted header credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_ph_crdt_depleted[vplane15] 0x0080 4 Number of times
* the posted header credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_ph_crdt_depleted[vplane16] 0x0088 4 Number of times
* the posted header credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_pd_crdt_depleted[vplane0] 0x0090 4 Number of times
* the posted data credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_pd_crdt_depleted[vplane1] 0x0098 4 Number of times
* the posted data credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_pd_crdt_depleted[vplane2] 0x00a0 4 Number of times
* the posted data credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_pd_crdt_depleted[vplane3] 0x00a8 4 Number of times
* the posted data credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_pd_crdt_depleted[vplane4] 0x00b0 4 Number of times
* the posted data credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_pd_crdt_depleted[vplane5] 0x00b8 4 Number of times
* the posted data credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_pd_crdt_depleted[vplane6] 0x00c0 4 Number of times
* the posted data credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_pd_crdt_depleted[vplane7] 0x00c8 4 Number of times
* the posted data credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_pd_crdt_depleted[vplane8] 0x00d0 4 Number of times
* the posted data credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_pd_crdt_depleted[vplane9] 0x00d8 4 Number of times
* the posted data credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_pd_crdt_depleted[vplane10] 0x00e0 4 Number of times
* the posted data credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_pd_crdt_depleted[vplane11] 0x00e8 4 Number of times
* the posted data credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_pd_crdt_depleted[vplane12] 0x00f0 4 Number of times
* the posted data credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_pd_crdt_depleted[vplane13] 0x00f8 4 Number of times
* the posted data credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_pd_crdt_depleted[vplane14] 0x0100 4 Number of times
* the posted data credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_pd_crdt_depleted[vplane15] 0x0108 4 Number of times
* the posted data credits for upstream PCI writes were depleted
* @pic.wrcrdtarb_pd_crdt_depleted[vplane16] 0x0110 4 Number of times
* the posted data credits for upstream PCI writes were depleted
* @pic.rdcrdtarb_nph_crdt_depleted[vplane0] 0x0118 4 Number of times
* the non-posted header credits for upstream PCI reads were depleted
* @pic.rdcrdtarb_nph_crdt_depleted[vplane1] 0x0120 4 Number of times
* the non-posted header credits for upstream PCI reads were depleted
* @pic.rdcrdtarb_nph_crdt_depleted[vplane2] 0x0128 4 Number of times
* the non-posted header credits for upstream PCI reads were depleted
* @pic.rdcrdtarb_nph_crdt_depleted[vplane3] 0x0130 4 Number of times
* the non-posted header credits for upstream PCI reads were depleted
* @pic.rdcrdtarb_nph_crdt_depleted[vplane4] 0x0138 4 Number of times
* the non-posted header credits for upstream PCI reads were depleted
* @pic.rdcrdtarb_nph_crdt_depleted[vplane5] 0x0140 4 Number of times
* the non-posted header credits for upstream PCI reads were depleted
* @pic.rdcrdtarb_nph_crdt_depleted[vplane6] 0x0148 4 Number of times
* the non-posted header credits for upstream PCI reads were depleted
* @pic.rdcrdtarb_nph_crdt_depleted[vplane7] 0x0150 4 Number of times
* the non-posted header credits for upstream PCI reads were depleted
* @pic.rdcrdtarb_nph_crdt_depleted[vplane8] 0x0158 4 Number of times
* the non-posted header credits for upstream PCI reads were depleted
* @pic.rdcrdtarb_nph_crdt_depleted[vplane9] 0x0160 4 Number of times
* the non-posted header credits for upstream PCI reads were depleted
* @pic.rdcrdtarb_nph_crdt_depleted[vplane10] 0x0168 4 Number of times
* the non-posted header credits for upstream PCI reads were depleted
* @pic.rdcrdtarb_nph_crdt_depleted[vplane11] 0x0170 4 Number of times
* the non-posted header credits for upstream PCI reads were depleted
* @pic.rdcrdtarb_nph_crdt_depleted[vplane12] 0x0178 4 Number of times
* the non-posted header credits for upstream PCI reads were depleted
* @pic.rdcrdtarb_nph_crdt_depleted[vplane13] 0x0180 4 Number of times
* the non-posted header credits for upstream PCI reads were depleted
* @pic.rdcrdtarb_nph_crdt_depleted[vplane14] 0x0188 4 Number of times
* the non-posted header credits for upstream PCI reads were depleted
* @pic.rdcrdtarb_nph_crdt_depleted[vplane15] 0x0190 4 Number of times
* the non-posted header credits for upstream PCI reads were depleted
* @pic.rdcrdtarb_nph_crdt_depleted[vplane16] 0x0198 4 Number of times
* the non-posted header credits for upstream PCI reads were depleted
* @pic.ini_rd_vpin_drop 0x01a0 4 Number of DMA reads initiated by
* the adapter that were discarded because the VPATH instance number does
* not match
* @pic.ini_wr_vpin_drop 0x01a4 4 Number of DMA writes initiated
* by the adapter that were discarded because the VPATH instance number
* does not match
* @pic.genstats_count0 0x01a8 4 Configurable statistic #1. Refer
* to the GENSTATS0_CFG for information on configuring this statistic
* @pic.genstats_count1 0x01ac 4 Configurable statistic #2. Refer
* to the GENSTATS1_CFG for information on configuring this statistic
* @pic.genstats_count2 0x01b0 4 Configurable statistic #3. Refer
* to the GENSTATS2_CFG for information on configuring this statistic
* @pic.genstats_count3 0x01b4 4 Configurable statistic #4. Refer
* to the GENSTATS3_CFG for information on configuring this statistic
* @pic.genstats_count4 0x01b8 4 Configurable statistic #5. Refer
* to the GENSTATS4_CFG for information on configuring this statistic
* @pic.genstats_count5 0x01c0 4 Configurable statistic #6. Refer
* to the GENSTATS5_CFG for information on configuring this statistic
* @pci.rstdrop_cpl 0x01c8 4
* @pci.rstdrop_msg 0x01cc 4
* @pci.rstdrop_client1 0x01d0 4
* @pci.rstdrop_client0 0x01d4 4
* @pci.rstdrop_client2 0x01d8 4
* @pci.depl_cplh[vplane0] 0x01e2 2 Number of times completion
* header credits were depleted
* @pci.depl_nph[vplane0] 0x01e4 2 Number of times non posted
* header credits were depleted
* @pci.depl_ph[vplane0] 0x01e6 2 Number of times the posted
* header credits were depleted
* @pci.depl_cplh[vplane1] 0x01ea 2
* @pci.depl_nph[vplane1] 0x01ec 2
* @pci.depl_ph[vplane1] 0x01ee 2
* @pci.depl_cplh[vplane2] 0x01f2 2
* @pci.depl_nph[vplane2] 0x01f4 2
* @pci.depl_ph[vplane2] 0x01f6 2
* @pci.depl_cplh[vplane3] 0x01fa 2
* @pci.depl_nph[vplane3] 0x01fc 2
* @pci.depl_ph[vplane3] 0x01fe 2
* @pci.depl_cplh[vplane4] 0x0202 2
* @pci.depl_nph[vplane4] 0x0204 2
* @pci.depl_ph[vplane4] 0x0206 2
* @pci.depl_cplh[vplane5] 0x020a 2
* @pci.depl_nph[vplane5] 0x020c 2
* @pci.depl_ph[vplane5] 0x020e 2
* @pci.depl_cplh[vplane6] 0x0212 2
* @pci.depl_nph[vplane6] 0x0214 2
* @pci.depl_ph[vplane6] 0x0216 2
* @pci.depl_cplh[vplane7] 0x021a 2
* @pci.depl_nph[vplane7] 0x021c 2
* @pci.depl_ph[vplane7] 0x021e 2
* @pci.depl_cplh[vplane8] 0x0222 2
* @pci.depl_nph[vplane8] 0x0224 2
* @pci.depl_ph[vplane8] 0x0226 2
* @pci.depl_cplh[vplane9] 0x022a 2
* @pci.depl_nph[vplane9] 0x022c 2
* @pci.depl_ph[vplane9] 0x022e 2
* @pci.depl_cplh[vplane10] 0x0232 2
* @pci.depl_nph[vplane10] 0x0234 2
* @pci.depl_ph[vplane10] 0x0236 2
* @pci.depl_cplh[vplane11] 0x023a 2
* @pci.depl_nph[vplane11] 0x023c 2
* @pci.depl_ph[vplane11] 0x023e 2
* @pci.depl_cplh[vplane12] 0x0242 2
* @pci.depl_nph[vplane12] 0x0244 2
* @pci.depl_ph[vplane12] 0x0246 2
* @pci.depl_cplh[vplane13] 0x024a 2
* @pci.depl_nph[vplane13] 0x024c 2
* @pci.depl_ph[vplane13] 0x024e 2
* @pci.depl_cplh[vplane14] 0x0252 2
* @pci.depl_nph[vplane14] 0x0254 2
* @pci.depl_ph[vplane14] 0x0256 2
* @pci.depl_cplh[vplane15] 0x025a 2
* @pci.depl_nph[vplane15] 0x025c 2
* @pci.depl_ph[vplane15] 0x025e 2
* @pci.depl_cplh[vplane16] 0x0262 2
* @pci.depl_nph[vplane16] 0x0264 2
* @pci.depl_ph[vplane16] 0x0266 2
* @pci.depl_cpld[vplane0] 0x026a 2 Number of times completion data
* credits were depleted
* @pci.depl_npd[vplane0] 0x026c 2 Number of times non posted data
* credits were depleted
* @pci.depl_pd[vplane0] 0x026e 2 Number of times the posted data
* credits were depleted
* @pci.depl_cpld[vplane1] 0x0272 2
* @pci.depl_npd[vplane1] 0x0274 2
* @pci.depl_pd[vplane1] 0x0276 2
* @pci.depl_cpld[vplane2] 0x027a 2
* @pci.depl_npd[vplane2] 0x027c 2
* @pci.depl_pd[vplane2] 0x027e 2
* @pci.depl_cpld[vplane3] 0x0282 2
* @pci.depl_npd[vplane3] 0x0284 2
* @pci.depl_pd[vplane3] 0x0286 2
* @pci.depl_cpld[vplane4] 0x028a 2
* @pci.depl_npd[vplane4] 0x028c 2
* @pci.depl_pd[vplane4] 0x028e 2
* @pci.depl_cpld[vplane5] 0x0292 2
* @pci.depl_npd[vplane5] 0x0294 2
* @pci.depl_pd[vplane5] 0x0296 2
* @pci.depl_cpld[vplane6] 0x029a 2
* @pci.depl_npd[vplane6] 0x029c 2
* @pci.depl_pd[vplane6] 0x029e 2
* @pci.depl_cpld[vplane7] 0x02a2 2
* @pci.depl_npd[vplane7] 0x02a4 2
* @pci.depl_pd[vplane7] 0x02a6 2
* @pci.depl_cpld[vplane8] 0x02aa 2
* @pci.depl_npd[vplane8] 0x02ac 2
* @pci.depl_pd[vplane8] 0x02ae 2
* @pci.depl_cpld[vplane9] 0x02b2 2
* @pci.depl_npd[vplane9] 0x02b4 2
* @pci.depl_pd[vplane9] 0x02b6 2
* @pci.depl_cpld[vplane10] 0x02ba 2
* @pci.depl_npd[vplane10] 0x02bc 2
* @pci.depl_pd[vplane10] 0x02be 2
* @pci.depl_cpld[vplane11] 0x02c2 2
* @pci.depl_npd[vplane11] 0x02c4 2
* @pci.depl_pd[vplane11] 0x02c6 2
* @pci.depl_cpld[vplane12] 0x02ca 2
* @pci.depl_npd[vplane12] 0x02cc 2
* @pci.depl_pd[vplane12] 0x02ce 2
* @pci.depl_cpld[vplane13] 0x02d2 2
* @pci.depl_npd[vplane13] 0x02d4 2
* @pci.depl_pd[vplane13] 0x02d6 2
* @pci.depl_cpld[vplane14] 0x02da 2
* @pci.depl_npd[vplane14] 0x02dc 2
* @pci.depl_pd[vplane14] 0x02de 2
* @pci.depl_cpld[vplane15] 0x02e2 2
* @pci.depl_npd[vplane15] 0x02e4 2
* @pci.depl_pd[vplane15] 0x02e6 2
* @pci.depl_cpld[vplane16] 0x02ea 2
* @pci.depl_npd[vplane16] 0x02ec 2
* @pci.depl_pd[vplane16] 0x02ee 2
* @xgmac_port[3];
* @xgmac_aggr[2];
* @xgmac.global_prog_event_gnum0 0x0ae0 8 Programmable statistic.
* Increments when internal logic detects a certain event. See register
* XMAC_STATS_GLOBAL_CFG.EVENT_GNUM0_CFG for more information.
* @xgmac.global_prog_event_gnum1 0x0ae8 8 Programmable statistic.
* Increments when internal logic detects a certain event. See register
* XMAC_STATS_GLOBAL_CFG.EVENT_GNUM1_CFG for more information.
* @xgmac.orp_lro_events 0x0af8 8
* @xgmac.orp_bs_events 0x0b00 8
* @xgmac.orp_iwarp_events 0x0b08 8
* @xgmac.tx_permitted_frms 0x0b14 4
* @xgmac.port2_tx_any_frms 0x0b1d 1
* @xgmac.port1_tx_any_frms 0x0b1e 1
* @xgmac.port0_tx_any_frms 0x0b1f 1
* @xgmac.port2_rx_any_frms 0x0b25 1
* @xgmac.port1_rx_any_frms 0x0b26 1
* @xgmac.port0_rx_any_frms 0x0b27 1
*
* Titan mrpcim hardware statistics.
*/
struct vxge_hw_device_stats_mrpcim_info {
/*0x0000*/ u32 pic_ini_rd_drop;
/*0x0004*/ u32 pic_ini_wr_drop;
/*0x0008*/ struct {
/*0x0000*/ u32 pic_wrcrdtarb_ph_crdt_depleted;
/*0x0004*/ u32 unused1;
} pic_wrcrdtarb_ph_crdt_depleted_vplane[17];
/*0x0090*/ struct {
/*0x0000*/ u32 pic_wrcrdtarb_pd_crdt_depleted;
/*0x0004*/ u32 unused2;
} pic_wrcrdtarb_pd_crdt_depleted_vplane[17];
/*0x0118*/ struct {
/*0x0000*/ u32 pic_rdcrdtarb_nph_crdt_depleted;
/*0x0004*/ u32 unused3;
} pic_rdcrdtarb_nph_crdt_depleted_vplane[17];
/*0x01a0*/ u32 pic_ini_rd_vpin_drop;
/*0x01a4*/ u32 pic_ini_wr_vpin_drop;
/*0x01a8*/ u32 pic_genstats_count0;
/*0x01ac*/ u32 pic_genstats_count1;
/*0x01b0*/ u32 pic_genstats_count2;
/*0x01b4*/ u32 pic_genstats_count3;
/*0x01b8*/ u32 pic_genstats_count4;
/*0x01bc*/ u32 unused4;
/*0x01c0*/ u32 pic_genstats_count5;
/*0x01c4*/ u32 unused5;
/*0x01c8*/ u32 pci_rstdrop_cpl;
/*0x01cc*/ u32 pci_rstdrop_msg;
/*0x01d0*/ u32 pci_rstdrop_client1;
/*0x01d4*/ u32 pci_rstdrop_client0;
/*0x01d8*/ u32 pci_rstdrop_client2;
/*0x01dc*/ u32 unused6;
/*0x01e0*/ struct {
/*0x0000*/ u16 unused7;
/*0x0002*/ u16 pci_depl_cplh;
/*0x0004*/ u16 pci_depl_nph;
/*0x0006*/ u16 pci_depl_ph;
} pci_depl_h_vplane[17];
/*0x0268*/ struct {
/*0x0000*/ u16 unused8;
/*0x0002*/ u16 pci_depl_cpld;
/*0x0004*/ u16 pci_depl_npd;
/*0x0006*/ u16 pci_depl_pd;
} pci_depl_d_vplane[17];
/*0x02f0*/ struct vxge_hw_xmac_port_stats xgmac_port[3];
/*0x0a10*/ struct vxge_hw_xmac_aggr_stats xgmac_aggr[2];
/*0x0ae0*/ u64 xgmac_global_prog_event_gnum0;
/*0x0ae8*/ u64 xgmac_global_prog_event_gnum1;
/*0x0af0*/ u64 unused7;
/*0x0af8*/ u64 unused8;
/*0x0b00*/ u64 unused9;
/*0x0b08*/ u64 unused10;
/*0x0b10*/ u32 unused11;
/*0x0b14*/ u32 xgmac_tx_permitted_frms;
/*0x0b18*/ u32 unused12;
/*0x0b1c*/ u8 unused13;
/*0x0b1d*/ u8 xgmac_port2_tx_any_frms;
/*0x0b1e*/ u8 xgmac_port1_tx_any_frms;
/*0x0b1f*/ u8 xgmac_port0_tx_any_frms;
/*0x0b20*/ u32 unused14;
/*0x0b24*/ u8 unused15;
/*0x0b25*/ u8 xgmac_port2_rx_any_frms;
/*0x0b26*/ u8 xgmac_port1_rx_any_frms;
/*0x0b27*/ u8 xgmac_port0_rx_any_frms;
} __packed;
/**
* struct vxge_hw_device_stats_hw_info - Titan hardware statistics.
* @vpath_info: VPath statistics
* @vpath_info_sav: Vpath statistics saved
*
* Titan hardware statistics.
*/
struct vxge_hw_device_stats_hw_info {
struct vxge_hw_vpath_stats_hw_info
*vpath_info[VXGE_HW_MAX_VIRTUAL_PATHS];
struct vxge_hw_vpath_stats_hw_info
vpath_info_sav[VXGE_HW_MAX_VIRTUAL_PATHS];
};
/**
* struct vxge_hw_vpath_stats_sw_common_info - HW common
* statistics for queues.
* @full_cnt: Number of times the queue was full
* @usage_cnt: usage count.
* @usage_max: Maximum usage
* @reserve_free_swaps_cnt: Reserve/free swap counter. Internal usage.
* @total_compl_cnt: Total descriptor completion count.
*
* Hw queue counters
* See also: struct vxge_hw_vpath_stats_sw_fifo_info{},
* struct vxge_hw_vpath_stats_sw_ring_info{},
*/
struct vxge_hw_vpath_stats_sw_common_info {
u32 full_cnt;
u32 usage_cnt;
u32 usage_max;
u32 reserve_free_swaps_cnt;
u32 total_compl_cnt;
};
/**
* struct vxge_hw_vpath_stats_sw_fifo_info - HW fifo statistics
* @common_stats: Common counters for all queues
* @total_posts: Total number of postings on the queue.
* @total_buffers: Total number of buffers posted.
* @txd_t_code_err_cnt: Array of transmit transfer codes. The position
* (index) in this array reflects the transfer code type, for instance
* 0xA - "loss of link".
* Value txd_t_code_err_cnt[i] reflects the
* number of times the corresponding transfer code was encountered.
*
* HW fifo counters
* See also: struct vxge_hw_vpath_stats_sw_common_info{},
* struct vxge_hw_vpath_stats_sw_ring_info{},
*/
struct vxge_hw_vpath_stats_sw_fifo_info {
struct vxge_hw_vpath_stats_sw_common_info common_stats;
u32 total_posts;
u32 total_buffers;
u32 txd_t_code_err_cnt[VXGE_HW_DTR_MAX_T_CODE];
};
/**
* struct vxge_hw_vpath_stats_sw_ring_info - HW ring statistics
* @common_stats: Common counters for all queues
* @rxd_t_code_err_cnt: Array of receive transfer codes. The position
* (index) in this array reflects the transfer code type,
* for instance
* 0x7 - for "invalid receive buffer size", or 0x8 - for ECC.
* Value rxd_t_code_err_cnt[i] reflects the
* number of times the corresponding transfer code was encountered.
*
* HW ring counters
* See also: struct vxge_hw_vpath_stats_sw_common_info{},
* struct vxge_hw_vpath_stats_sw_fifo_info{},
*/
struct vxge_hw_vpath_stats_sw_ring_info {
struct vxge_hw_vpath_stats_sw_common_info common_stats;
u32 rxd_t_code_err_cnt[VXGE_HW_DTR_MAX_T_CODE];
};
/**
* struct vxge_hw_vpath_stats_sw_err - HW vpath error statistics
* @unknown_alarms:
* @network_sustained_fault:
* @network_sustained_ok:
* @kdfcctl_fifo0_overwrite:
* @kdfcctl_fifo0_poison:
* @kdfcctl_fifo0_dma_error:
* @dblgen_fifo0_overflow:
* @statsb_pif_chain_error:
* @statsb_drop_timeout:
* @target_illegal_access:
* @ini_serr_det:
* @prc_ring_bumps:
* @prc_rxdcm_sc_err:
* @prc_rxdcm_sc_abort:
* @prc_quanta_size_err:
*
* HW vpath error statistics
*/
struct vxge_hw_vpath_stats_sw_err {
u32 unknown_alarms;
u32 network_sustained_fault;
u32 network_sustained_ok;
u32 kdfcctl_fifo0_overwrite;
u32 kdfcctl_fifo0_poison;
u32 kdfcctl_fifo0_dma_error;
u32 dblgen_fifo0_overflow;
u32 statsb_pif_chain_error;
u32 statsb_drop_timeout;
u32 target_illegal_access;
u32 ini_serr_det;
u32 prc_ring_bumps;
u32 prc_rxdcm_sc_err;
u32 prc_rxdcm_sc_abort;
u32 prc_quanta_size_err;
};
/**
* struct vxge_hw_vpath_stats_sw_info - HW vpath sw statistics
* @soft_reset_cnt: Number of times soft reset is done on this vpath.
* @error_stats: error counters for the vpath
* @ring_stats: counters for ring belonging to the vpath
* @fifo_stats: counters for fifo belonging to the vpath
*
* HW vpath sw statistics
* See also: struct vxge_hw_device_info{}.
*/
struct vxge_hw_vpath_stats_sw_info {
u32 soft_reset_cnt;
struct vxge_hw_vpath_stats_sw_err error_stats;
struct vxge_hw_vpath_stats_sw_ring_info ring_stats;
struct vxge_hw_vpath_stats_sw_fifo_info fifo_stats;
};
/**
* struct vxge_hw_device_stats_sw_info - HW own per-device statistics.
*
* @not_traffic_intr_cnt: Number of times the host was interrupted
* without new completions.
* "Non-traffic interrupt counter".
* @traffic_intr_cnt: Number of traffic interrupts for the device.
* @total_intr_cnt: Total number of interrupts for the device, i.e.
* @total_intr_cnt == @traffic_intr_cnt +
* @not_traffic_intr_cnt
* @soft_reset_cnt: Number of times soft reset is done on this device.
* @vpath_info: please see struct vxge_hw_vpath_stats_sw_info{}
* HW per-device statistics.
*/
struct vxge_hw_device_stats_sw_info {
u32 not_traffic_intr_cnt;
u32 traffic_intr_cnt;
u32 total_intr_cnt;
u32 soft_reset_cnt;
struct vxge_hw_vpath_stats_sw_info
vpath_info[VXGE_HW_MAX_VIRTUAL_PATHS];
};
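/*
 * Hypothetical helper, for illustration only (not part of the original
 * header): the kernel-doc above states the invariant total_intr_cnt ==
 * traffic_intr_cnt + not_traffic_intr_cnt; a debug path could verify it
 * when dumping driver statistics. The helper name is an assumption.
 */
static inline bool vxge_hw_sw_intr_counts_consistent(
	const struct vxge_hw_device_stats_sw_info *sw_stats)
{
	return sw_stats->total_intr_cnt ==
	       sw_stats->traffic_intr_cnt + sw_stats->not_traffic_intr_cnt;
}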
/**
* struct vxge_hw_device_stats_sw_err - HW device error statistics.
* @vpath_alarms: Number of vpath alarms
*
* HW Device error stats
*/
struct vxge_hw_device_stats_sw_err {
u32 vpath_alarms;
};
/**
* struct vxge_hw_device_stats - HW per-device statistics, both
* hardware-maintained and driver-maintained.
* @devh: HW device handle.
*
* @hw_dev_info_stats: Titan statistics maintained by the hardware.
* @sw_dev_info_stats: HW's "soft" device informational statistics, e.g. number
* of completions per interrupt.
* @sw_dev_err_stats: HW's "soft" device error statistics.
*
* Structure-container of HW per-device statistics. Note that per-channel
* statistics are kept in separate structures under HW's fifo and ring
* channels.
*/
struct vxge_hw_device_stats {
/* handles */
struct __vxge_hw_device *devh;
/* HW device hardware statistics */
struct vxge_hw_device_stats_hw_info hw_dev_info_stats;
/* HW device "soft" stats */
struct vxge_hw_device_stats_sw_err sw_dev_err_stats;
struct vxge_hw_device_stats_sw_info sw_dev_info_stats;
};
enum vxge_hw_status vxge_hw_device_hw_stats_enable(
struct __vxge_hw_device *devh);
enum vxge_hw_status vxge_hw_device_stats_get(
struct __vxge_hw_device *devh,
struct vxge_hw_device_stats_hw_info *hw_stats);
enum vxge_hw_status vxge_hw_driver_stats_get(
struct __vxge_hw_device *devh,
struct vxge_hw_device_stats_sw_info *sw_stats);
enum vxge_hw_status vxge_hw_mrpcim_stats_enable(struct __vxge_hw_device *devh);
enum vxge_hw_status vxge_hw_mrpcim_stats_disable(struct __vxge_hw_device *devh);
enum vxge_hw_status
vxge_hw_mrpcim_stats_access(
struct __vxge_hw_device *devh,
u32 operation,
u32 location,
u32 offset,
u64 *stat);
enum vxge_hw_status
vxge_hw_device_xmac_stats_get(struct __vxge_hw_device *devh,
struct vxge_hw_xmac_stats *xmac_stats);
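/*
 * Hypothetical usage sketch, for illustration only (not part of the
 * original header): a minimal sequence for taking a hardware statistics
 * snapshot with the accessors declared above. The wrapper name is an
 * assumption; VXGE_HW_OK is the success value of enum vxge_hw_status
 * defined elsewhere in this driver, and hw_stats is caller-allocated.
 */
static inline enum vxge_hw_status vxge_hw_stats_snapshot_example(
	struct __vxge_hw_device *devh,
	struct vxge_hw_device_stats_hw_info *hw_stats)
{
	enum vxge_hw_status status;

	/* Arm statistics collection before reading a snapshot */
	status = vxge_hw_device_hw_stats_enable(devh);
	if (status != VXGE_HW_OK)
		return status;

	return vxge_hw_device_stats_get(devh, hw_stats);
}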
/**
* enum vxge_hw_mgmt_reg_type - Register types.
*
* @vxge_hw_mgmt_reg_type_legacy: Legacy registers
* @vxge_hw_mgmt_reg_type_toc: TOC Registers
* @vxge_hw_mgmt_reg_type_common: Common Registers
* @vxge_hw_mgmt_reg_type_mrpcim: mrpcim registers
* @vxge_hw_mgmt_reg_type_srpcim: srpcim registers
* @vxge_hw_mgmt_reg_type_vpmgmt: vpath management registers
* @vxge_hw_mgmt_reg_type_vpath: vpath registers
*
* Register type enumeration
*/
enum vxge_hw_mgmt_reg_type {
vxge_hw_mgmt_reg_type_legacy = 0,
vxge_hw_mgmt_reg_type_toc = 1,
vxge_hw_mgmt_reg_type_common = 2,
vxge_hw_mgmt_reg_type_mrpcim = 3,
vxge_hw_mgmt_reg_type_srpcim = 4,
vxge_hw_mgmt_reg_type_vpmgmt = 5,
vxge_hw_mgmt_reg_type_vpath = 6
};
enum vxge_hw_status
vxge_hw_mgmt_reg_read(struct __vxge_hw_device *devh,
enum vxge_hw_mgmt_reg_type type,
u32 index,
u32 offset,
u64 *value);
enum vxge_hw_status
vxge_hw_mgmt_reg_write(struct __vxge_hw_device *devh,
enum vxge_hw_mgmt_reg_type type,
u32 index,
u32 offset,
u64 value);
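/*
 * Hypothetical sketch, for illustration only (not part of the original
 * header): reading a 64-bit value from the common register space through
 * the accessor above. Passing index 0 for the common space is an
 * assumption made for the example; the wrapper name is an assumption.
 */
static inline enum vxge_hw_status vxge_hw_common_reg_read_example(
	struct __vxge_hw_device *devh, u32 offset, u64 *value)
{
	return vxge_hw_mgmt_reg_read(devh, vxge_hw_mgmt_reg_type_common,
				     0, offset, value);
}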
/**
* enum vxge_hw_rxd_state - Descriptor (RXD) state.
* @VXGE_HW_RXD_STATE_NONE: Invalid state.
* @VXGE_HW_RXD_STATE_AVAIL: Descriptor is available for reservation.
* @VXGE_HW_RXD_STATE_POSTED: Descriptor is posted for processing by the
* device.
* @VXGE_HW_RXD_STATE_FREED: Descriptor is free and can be reused for
* filling-in and posting later.
*
* Titan/HW descriptor states.
*
*/
enum vxge_hw_rxd_state {
VXGE_HW_RXD_STATE_NONE = 0,
VXGE_HW_RXD_STATE_AVAIL = 1,
VXGE_HW_RXD_STATE_POSTED = 2,
VXGE_HW_RXD_STATE_FREED = 3
};
/**
* struct vxge_hw_ring_rxd_info - Extended information associated with a
* completed ring descriptor.
* @syn_flag: SYN flag
* @is_icmp: Is ICMP
* @fast_path_eligible: Fast Path Eligible flag
* @l3_cksum_valid: Set if the Layer 3 (IP) checksum is valid.
* @l3_cksum: Result of IP checksum check (by Titan hardware).
* This field containing VXGE_HW_L3_CKSUM_OK would mean that
* the checksum is correct, otherwise - the datagram is
* corrupted.
* @l4_cksum_valid: Set if the Layer 4 (TCP/UDP) checksum is valid.
* @l4_cksum: Result of TCP/UDP checksum check (by Titan hardware).
* This field containing VXGE_HW_L4_CKSUM_OK would mean that
* the checksum is correct. Otherwise - the packet is
* corrupted.
* @frame: Zero or more of enum vxge_hw_frame_type flags.
* See enum vxge_hw_frame_type{}.
* @proto: zero or more of enum vxge_hw_frame_proto flags. Reporting bits for
* various higher-layer protocols, including (but not restricted to)
* TCP and UDP. See enum vxge_hw_frame_proto{}.
* @is_vlan: If vlan tag is valid
* @vlan: VLAN tag extracted from the received frame.
* @rth_bucket: RTH bucket
* @rth_it_hit: Set if the RTH hash value calculated by the Titan hardware
* has a matching entry in the Indirection Table.
* @rth_spdm_hit: Set if the RTH hash value calculated by the Titan hardware
* has a matching entry in the Socket Pair Direct Match table.
* @rth_hash_type: RTH hash code of the function used to calculate the hash.
* @rth_value: Receive Traffic Hashing (RTH) hash value. Produced by Titan
* hardware if RTH is enabled.
*/
struct vxge_hw_ring_rxd_info {
u32 syn_flag;
u32 is_icmp;
u32 fast_path_eligible;
u32 l3_cksum_valid;
u32 l3_cksum;
u32 l4_cksum_valid;
u32 l4_cksum;
u32 frame;
u32 proto;
u32 is_vlan;
u32 vlan;
u32 rth_bucket;
u32 rth_it_hit;
u32 rth_spdm_hit;
u32 rth_hash_type;
u32 rth_value;
};
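/*
 * Hypothetical helper, for illustration only (not part of the original
 * header): per the l3_cksum/l4_cksum kernel-doc above, both checksum
 * results must equal the VXGE_HW_L3_CKSUM_OK/VXGE_HW_L4_CKSUM_OK values
 * referenced there before the receive path may trust the hardware
 * verification. The helper name is an assumption.
 */
static inline bool vxge_hw_rxd_csums_ok(
	const struct vxge_hw_ring_rxd_info *ext_info)
{
	return ext_info->l3_cksum == VXGE_HW_L3_CKSUM_OK &&
	       ext_info->l4_cksum == VXGE_HW_L4_CKSUM_OK;
}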
/**
* enum vxge_hw_ring_tcode - Transfer codes returned by adapter
* @VXGE_HW_RING_T_CODE_OK: Transfer ok.
* @VXGE_HW_RING_T_CODE_L3_CKSUM_MISMATCH: Layer 3 checksum presentation
* configuration mismatch.
* @VXGE_HW_RING_T_CODE_L4_CKSUM_MISMATCH: Layer 4 checksum presentation
* configuration mismatch.
* @VXGE_HW_RING_T_CODE_L3_L4_CKSUM_MISMATCH: Layer 3 and Layer 4 checksum
* presentation configuration mismatch.
* @VXGE_HW_RING_T_CODE_L3_PKT_ERR: Layer 3 error: unparseable packet,
* such as an unknown IPv6 header.
* @VXGE_HW_RING_T_CODE_L2_FRM_ERR: Layer 2 error: frame integrity
* error, such as FCS or ECC.
* @VXGE_HW_RING_T_CODE_BUF_SIZE_ERR: Buffer size error: the RxD
* buffer(s) were not appropriately sized and data loss occurred.
* @VXGE_HW_RING_T_CODE_INT_ECC_ERR: Internal ECC error: RxD corrupted.
* @VXGE_HW_RING_T_CODE_BENIGN_OVFLOW: Benign overflow: the contents of
* Segment1 exceeded the capacity of Buffer1 and the remainder
* was placed in Buffer2. Segment2 now starts in Buffer3.
* No data loss or errors occurred.
* @VXGE_HW_RING_T_CODE_ZERO_LEN_BUFF: Buffer size 0: one of the RxD's
* assigned buffers has a size of 0 bytes.
* @VXGE_HW_RING_T_CODE_FRM_DROP: Frame dropped, either due to
* VPath Reset or because of a VPIN mismatch.
* @VXGE_HW_RING_T_CODE_UNUSED: Unused
* @VXGE_HW_RING_T_CODE_MULTI_ERR: Multiple errors: more than one
* transfer code condition occurred.
*
* Transfer codes returned by adapter.
*/
enum vxge_hw_ring_tcode {
VXGE_HW_RING_T_CODE_OK = 0x0,
VXGE_HW_RING_T_CODE_L3_CKSUM_MISMATCH = 0x1,
VXGE_HW_RING_T_CODE_L4_CKSUM_MISMATCH = 0x2,
VXGE_HW_RING_T_CODE_L3_L4_CKSUM_MISMATCH = 0x3,
VXGE_HW_RING_T_CODE_L3_PKT_ERR = 0x5,
VXGE_HW_RING_T_CODE_L2_FRM_ERR = 0x6,
VXGE_HW_RING_T_CODE_BUF_SIZE_ERR = 0x7,
VXGE_HW_RING_T_CODE_INT_ECC_ERR = 0x8,
VXGE_HW_RING_T_CODE_BENIGN_OVFLOW = 0x9,
VXGE_HW_RING_T_CODE_ZERO_LEN_BUFF = 0xA,
VXGE_HW_RING_T_CODE_FRM_DROP = 0xC,
VXGE_HW_RING_T_CODE_UNUSED = 0xE,
VXGE_HW_RING_T_CODE_MULTI_ERR = 0xF
};
enum vxge_hw_status vxge_hw_ring_rxd_reserve(
struct __vxge_hw_ring *ring_handle,
void **rxdh);
void
vxge_hw_ring_rxd_pre_post(
struct __vxge_hw_ring *ring_handle,
void *rxdh);
void
vxge_hw_ring_rxd_post_post(
struct __vxge_hw_ring *ring_handle,
void *rxdh);
void
vxge_hw_ring_rxd_post_post_wmb(
struct __vxge_hw_ring *ring_handle,
void *rxdh);
void vxge_hw_ring_rxd_post(
struct __vxge_hw_ring *ring_handle,
void *rxdh);
enum vxge_hw_status vxge_hw_ring_rxd_next_completed(
struct __vxge_hw_ring *ring_handle,
void **rxdh,
u8 *t_code);
enum vxge_hw_status vxge_hw_ring_handle_tcode(
struct __vxge_hw_ring *ring_handle,
void *rxdh,
u8 t_code);
void vxge_hw_ring_rxd_free(
struct __vxge_hw_ring *ring_handle,
void *rxdh);
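/*
 * Hypothetical polling sketch, for illustration only (not part of the
 * original header): a minimal completion loop over the ring APIs declared
 * above. A real consumer would pass the buffer up the stack and re-post a
 * fresh RxD; here the descriptor is simply returned to the free pool.
 * The function name is an assumption.
 */
static inline void vxge_hw_ring_poll_example(struct __vxge_hw_ring *ringh)
{
	void *rxdh;
	u8 t_code;

	while (vxge_hw_ring_rxd_next_completed(ringh, &rxdh, &t_code) ==
	       VXGE_HW_OK) {
		if (t_code != VXGE_HW_RING_T_CODE_OK) {
			/* Let the HW layer translate the error t_code */
			vxge_hw_ring_handle_tcode(ringh, rxdh, t_code);
		}
		vxge_hw_ring_rxd_free(ringh, rxdh);
	}
}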
/**
* enum vxge_hw_frame_proto - Higher-layer Ethernet protocols.
* @VXGE_HW_FRAME_PROTO_VLAN_TAGGED: VLAN.
* @VXGE_HW_FRAME_PROTO_IPV4: IPv4.
* @VXGE_HW_FRAME_PROTO_IPV6: IPv6.
* @VXGE_HW_FRAME_PROTO_IP_FRAG: IP fragmented.
* @VXGE_HW_FRAME_PROTO_TCP: TCP.
* @VXGE_HW_FRAME_PROTO_UDP: UDP.
* @VXGE_HW_FRAME_PROTO_TCP_OR_UDP: TCP or UDP.
*
* Higher layer ethernet protocols and options.
*/
enum vxge_hw_frame_proto {
VXGE_HW_FRAME_PROTO_VLAN_TAGGED = 0x80,
VXGE_HW_FRAME_PROTO_IPV4 = 0x10,
VXGE_HW_FRAME_PROTO_IPV6 = 0x08,
VXGE_HW_FRAME_PROTO_IP_FRAG = 0x04,
VXGE_HW_FRAME_PROTO_TCP = 0x02,
VXGE_HW_FRAME_PROTO_UDP = 0x01,
VXGE_HW_FRAME_PROTO_TCP_OR_UDP = (VXGE_HW_FRAME_PROTO_TCP | \
VXGE_HW_FRAME_PROTO_UDP)
};
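/*
 * Hypothetical helper, for illustration only (not part of the original
 * header): the proto field of struct vxge_hw_ring_rxd_info is a bitmask
 * of these flags, so TCP-or-UDP traffic can be detected with a single
 * mask test. The helper name is an assumption.
 */
static inline bool vxge_hw_frame_proto_is_tcp_or_udp(u32 proto)
{
	return (proto & VXGE_HW_FRAME_PROTO_TCP_OR_UDP) != 0;
}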
/**
* enum vxge_hw_fifo_gather_code - Gather codes used in fifo TxD
* @VXGE_HW_FIFO_GATHER_CODE_FIRST: First TxDL
* @VXGE_HW_FIFO_GATHER_CODE_MIDDLE: Middle TxDL
* @VXGE_HW_FIFO_GATHER_CODE_LAST: Last TxDL
* @VXGE_HW_FIFO_GATHER_CODE_FIRST_LAST: First and Last TxDL.
*
* These gather codes are used to indicate the position of a TxD in a TxD list
*/
enum vxge_hw_fifo_gather_code {
VXGE_HW_FIFO_GATHER_CODE_FIRST = 0x2,
VXGE_HW_FIFO_GATHER_CODE_MIDDLE = 0x0,
VXGE_HW_FIFO_GATHER_CODE_LAST = 0x1,
VXGE_HW_FIFO_GATHER_CODE_FIRST_LAST = 0x3
};
/**
* enum vxge_hw_fifo_tcode - tcodes used in fifo
* @VXGE_HW_FIFO_T_CODE_OK: Transfer OK
* @VXGE_HW_FIFO_T_CODE_PCI_READ_CORRUPT: PCI read transaction (either TxD or
* frame data) returned with corrupt data.
* @VXGE_HW_FIFO_T_CODE_PCI_READ_FAIL: PCI read transaction was returned
* with no data.
* @VXGE_HW_FIFO_T_CODE_INVALID_MSS: The host attempted to send either a
* frame or LSO MSS that was too long (>9800B).
* @VXGE_HW_FIFO_T_CODE_LSO_ERROR: Error detected during TCP/UDP Large Send
* Offload operation, due to improper header template,
* unsupported protocol, etc.
* @VXGE_HW_FIFO_T_CODE_UNUSED: Unused
* @VXGE_HW_FIFO_T_CODE_MULTI_ERROR: Set to 1 by the adapter if multiple
* data buffer transfer errors are encountered (see below).
* Otherwise it is set to 0.
*
* These tcodes are returned by various APIs that report TxD status
*/
enum vxge_hw_fifo_tcode {
VXGE_HW_FIFO_T_CODE_OK = 0x0,
VXGE_HW_FIFO_T_CODE_PCI_READ_CORRUPT = 0x1,
VXGE_HW_FIFO_T_CODE_PCI_READ_FAIL = 0x2,
VXGE_HW_FIFO_T_CODE_INVALID_MSS = 0x3,
VXGE_HW_FIFO_T_CODE_LSO_ERROR = 0x4,
VXGE_HW_FIFO_T_CODE_UNUSED = 0x7,
VXGE_HW_FIFO_T_CODE_MULTI_ERROR = 0x8
};
enum vxge_hw_status vxge_hw_fifo_txdl_reserve(
struct __vxge_hw_fifo *fifoh,
void **txdlh,
void **txdl_priv);
void vxge_hw_fifo_txdl_buffer_set(
struct __vxge_hw_fifo *fifo_handle,
void *txdlh,
u32 frag_idx,
dma_addr_t dma_pointer,
u32 size);
void vxge_hw_fifo_txdl_post(
struct __vxge_hw_fifo *fifo_handle,
void *txdlh);
u32 vxge_hw_fifo_free_txdl_count_get(
struct __vxge_hw_fifo *fifo_handle);
enum vxge_hw_status vxge_hw_fifo_txdl_next_completed(
struct __vxge_hw_fifo *fifoh,
void **txdlh,
enum vxge_hw_fifo_tcode *t_code);
enum vxge_hw_status vxge_hw_fifo_handle_tcode(
struct __vxge_hw_fifo *fifoh,
void *txdlh,
enum vxge_hw_fifo_tcode t_code);
void vxge_hw_fifo_txdl_free(
struct __vxge_hw_fifo *fifoh,
void *txdlh);
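/*
 * Hypothetical transmit sketch, for illustration only (not part of the
 * original header): the basic transmit sequence using the fifo APIs
 * declared above - reserve a TxDL, attach one already-DMA-mapped
 * fragment, and post it. The wrapper name is an assumption and the
 * caller is assumed to have mapped the buffer.
 */
static inline enum vxge_hw_status vxge_hw_fifo_xmit_one_example(
	struct __vxge_hw_fifo *fifoh, dma_addr_t dma_pointer, u32 size)
{
	void *txdlh;
	void *txdl_priv;
	enum vxge_hw_status status;

	status = vxge_hw_fifo_txdl_reserve(fifoh, &txdlh, &txdl_priv);
	if (status != VXGE_HW_OK)
		return status;

	/* Single fragment at index 0; multi-fragment frames repeat this
	 * call with increasing frag_idx before posting. */
	vxge_hw_fifo_txdl_buffer_set(fifoh, txdlh, 0, dma_pointer, size);
	vxge_hw_fifo_txdl_post(fifoh, txdlh);

	return VXGE_HW_OK;
}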
/*
* Device
*/
#define VXGE_HW_RING_NEXT_BLOCK_POINTER_OFFSET (VXGE_HW_BLOCK_SIZE-8)
#define VXGE_HW_RING_MEMBLOCK_IDX_OFFSET (VXGE_HW_BLOCK_SIZE-16)
/*
* struct __vxge_hw_ring_rxd_priv - Receive descriptor HW-private data.
* @dma_addr: DMA (mapped) address of _this_ descriptor.
* @dma_handle: DMA handle used to map the descriptor onto device.
* @dma_offset: Descriptor's offset in the memory block. HW allocates
* descriptors in memory blocks of %VXGE_HW_BLOCK_SIZE
* bytes. Each memblock is contiguous DMA-able memory. Each
* memblock contains 1 or more 4KB RxD blocks visible to the
* Titan hardware.
* @dma_object: DMA address and handle of the memory block that contains
* the descriptor. This member is used only in the "checked"
* version of the HW (to enforce certain assertions);
* otherwise it gets compiled out.
* @allocated: True if the descriptor is reserved, 0 otherwise. Internal usage.
*
* Per-receive descriptor HW-private data. HW uses the space to keep DMA
* information associated with the descriptor. Note that the driver can ask HW
* to allocate additional per-descriptor space for its own (driver-specific)
* purposes.
*/
struct __vxge_hw_ring_rxd_priv {
dma_addr_t dma_addr;
struct pci_dev *dma_handle;
ptrdiff_t dma_offset;
#ifdef VXGE_DEBUG_ASSERT
struct vxge_hw_mempool_dma *dma_object;
#endif
};
struct vxge_hw_mempool_cbs {
void (*item_func_alloc)(
struct vxge_hw_mempool *mempoolh,
u32 memblock_index,
struct vxge_hw_mempool_dma *dma_object,
u32 index,
u32 is_last);
};
#define VXGE_HW_VIRTUAL_PATH_HANDLE(vpath) \
((struct __vxge_hw_vpath_handle *)(vpath)->vpath_handles.next)
enum vxge_hw_status
__vxge_hw_vpath_rts_table_get(
struct __vxge_hw_vpath_handle *vpath_handle,
u32 action,
u32 rts_table,
u32 offset,
u64 *data1,
u64 *data2);
enum vxge_hw_status
__vxge_hw_vpath_rts_table_set(
struct __vxge_hw_vpath_handle *vpath_handle,
u32 action,
u32 rts_table,
u32 offset,
u64 data1,
u64 data2);
enum vxge_hw_status
__vxge_hw_vpath_enable(
struct __vxge_hw_device *devh,
u32 vp_id);
void vxge_hw_device_intr_enable(
struct __vxge_hw_device *devh);
u32 vxge_hw_device_set_intr_type(struct __vxge_hw_device *devh, u32 intr_mode);
void vxge_hw_device_intr_disable(
struct __vxge_hw_device *devh);
void vxge_hw_device_mask_all(
struct __vxge_hw_device *devh);
void vxge_hw_device_unmask_all(
struct __vxge_hw_device *devh);
enum vxge_hw_status vxge_hw_device_begin_irq(
struct __vxge_hw_device *devh,
u32 skip_alarms,
u64 *reason);
void vxge_hw_device_clear_tx_rx(
struct __vxge_hw_device *devh);
/*
* Virtual Paths
*/
void vxge_hw_vpath_dynamic_rti_rtimer_set(struct __vxge_hw_ring *ring);
void vxge_hw_vpath_dynamic_tti_rtimer_set(struct __vxge_hw_fifo *fifo);
u32 vxge_hw_vpath_id(
struct __vxge_hw_vpath_handle *vpath_handle);
enum vxge_hw_vpath_mac_addr_add_mode {
VXGE_HW_VPATH_MAC_ADDR_ADD_DUPLICATE = 0,
VXGE_HW_VPATH_MAC_ADDR_DISCARD_DUPLICATE = 1,
VXGE_HW_VPATH_MAC_ADDR_REPLACE_DUPLICATE = 2
};
enum vxge_hw_status
vxge_hw_vpath_mac_addr_add(
struct __vxge_hw_vpath_handle *vpath_handle,
u8 *macaddr,
u8 *macaddr_mask,
enum vxge_hw_vpath_mac_addr_add_mode duplicate_mode);
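/*
 * Hypothetical sketch, for illustration only (not part of the original
 * header): adding a unicast MAC address to a vpath, replacing any
 * duplicate entry. The all-0xff mask for an exact match is an assumption
 * made for the example; the wrapper name is an assumption.
 */
static inline enum vxge_hw_status vxge_hw_vpath_mac_addr_add_exact_example(
	struct __vxge_hw_vpath_handle *vp, u8 *macaddr)
{
	u8 macaddr_mask[6] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };

	return vxge_hw_vpath_mac_addr_add(vp, macaddr, macaddr_mask,
			VXGE_HW_VPATH_MAC_ADDR_REPLACE_DUPLICATE);
}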
enum vxge_hw_status
vxge_hw_vpath_mac_addr_get(
struct __vxge_hw_vpath_handle *vpath_handle,
u8 *macaddr,
u8 *macaddr_mask);
enum vxge_hw_status
vxge_hw_vpath_mac_addr_get_next(
struct __vxge_hw_vpath_handle *vpath_handle,
u8 *macaddr,
u8 *macaddr_mask);
enum vxge_hw_status
vxge_hw_vpath_mac_addr_delete(
struct __vxge_hw_vpath_handle *vpath_handle,
u8 *macaddr,
u8 *macaddr_mask);
enum vxge_hw_status
vxge_hw_vpath_vid_add(
struct __vxge_hw_vpath_handle *vpath_handle,
u64 vid);
enum vxge_hw_status
vxge_hw_vpath_vid_delete(
struct __vxge_hw_vpath_handle *vpath_handle,
u64 vid);
enum vxge_hw_status
vxge_hw_vpath_etype_add(
struct __vxge_hw_vpath_handle *vpath_handle,
u64 etype);
enum vxge_hw_status
vxge_hw_vpath_etype_get(
struct __vxge_hw_vpath_handle *vpath_handle,
u64 *etype);
enum vxge_hw_status
vxge_hw_vpath_etype_get_next(
struct __vxge_hw_vpath_handle *vpath_handle,
u64 *etype);
enum vxge_hw_status
vxge_hw_vpath_etype_delete(
struct __vxge_hw_vpath_handle *vpath_handle,
u64 etype);
enum vxge_hw_status vxge_hw_vpath_promisc_enable(
struct __vxge_hw_vpath_handle *vpath_handle);
enum vxge_hw_status vxge_hw_vpath_promisc_disable(
struct __vxge_hw_vpath_handle *vpath_handle);
enum vxge_hw_status vxge_hw_vpath_bcast_enable(
struct __vxge_hw_vpath_handle *vpath_handle);
enum vxge_hw_status vxge_hw_vpath_mcast_enable(
struct __vxge_hw_vpath_handle *vpath_handle);
enum vxge_hw_status vxge_hw_vpath_mcast_disable(
struct __vxge_hw_vpath_handle *vpath_handle);
enum vxge_hw_status vxge_hw_vpath_poll_rx(
struct __vxge_hw_ring *ringh);
enum vxge_hw_status vxge_hw_vpath_poll_tx(
struct __vxge_hw_fifo *fifoh,
struct sk_buff ***skb_ptr, int nr_skb, int *more);
enum vxge_hw_status vxge_hw_vpath_alarm_process(
struct __vxge_hw_vpath_handle *vpath_handle,
u32 skip_alarms);
void
vxge_hw_vpath_msix_set(struct __vxge_hw_vpath_handle *vpath_handle,
int *tim_msix_id, int alarm_msix_id);
void
vxge_hw_vpath_msix_mask(struct __vxge_hw_vpath_handle *vpath_handle,
int msix_id);
void vxge_hw_vpath_msix_clear(struct __vxge_hw_vpath_handle *vp, int msix_id);
void vxge_hw_device_flush_io(struct __vxge_hw_device *devh);
void
vxge_hw_vpath_msix_unmask(struct __vxge_hw_vpath_handle *vpath_handle,
int msix_id);
enum vxge_hw_status vxge_hw_vpath_intr_enable(
struct __vxge_hw_vpath_handle *vpath_handle);
enum vxge_hw_status vxge_hw_vpath_intr_disable(
struct __vxge_hw_vpath_handle *vpath_handle);
void vxge_hw_vpath_inta_mask_tx_rx(
struct __vxge_hw_vpath_handle *vpath_handle);
void vxge_hw_vpath_inta_unmask_tx_rx(
struct __vxge_hw_vpath_handle *vpath_handle);
void
vxge_hw_channel_msix_mask(struct __vxge_hw_channel *channelh, int msix_id);
void
vxge_hw_channel_msix_unmask(struct __vxge_hw_channel *channelh, int msix_id);
void
vxge_hw_channel_msix_clear(struct __vxge_hw_channel *channelh, int msix_id);
void
vxge_hw_channel_dtr_try_complete(struct __vxge_hw_channel *channel,
void **dtrh);
void
vxge_hw_channel_dtr_complete(struct __vxge_hw_channel *channel);
void
vxge_hw_channel_dtr_free(struct __vxge_hw_channel *channel, void *dtrh);
int
vxge_hw_channel_dtr_count(struct __vxge_hw_channel *channel);
void vxge_hw_vpath_tti_ci_set(struct __vxge_hw_fifo *fifo);
void vxge_hw_vpath_dynamic_rti_ci_set(struct __vxge_hw_ring *ring);
#endif
/******************************************************************************
* This software may be used and distributed according to the terms of
* the GNU General Public License (GPL), incorporated herein by reference.
* Drivers based on or derived from this code fall under the GPL and must
* retain the authorship, copyright and license notice. This file is not
* a complete program and may only be used when the entire operating
* system is licensed under the GPL.
* See the file COPYING in this distribution for more information.
*
* vxge-version.h: Driver for Exar Corp's X3100 Series 10GbE PCIe I/O
* Virtualized Server Adapter.
* Copyright(c) 2002-2010 Exar Corp.
******************************************************************************/
#ifndef VXGE_VERSION_H
#define VXGE_VERSION_H
#define VXGE_VERSION_MAJOR "2"
#define VXGE_VERSION_MINOR "5"
#define VXGE_VERSION_FIX "3"
#define VXGE_VERSION_BUILD "22640"
#define VXGE_VERSION_FOR "k"
#define VXGE_FW_VER(maj, min, bld) (((maj) << 16) + ((min) << 8) + (bld))
#define VXGE_DEAD_FW_VER_MAJOR 1
#define VXGE_DEAD_FW_VER_MINOR 4
#define VXGE_DEAD_FW_VER_BUILD 4
#define VXGE_FW_DEAD_VER VXGE_FW_VER(VXGE_DEAD_FW_VER_MAJOR, \
VXGE_DEAD_FW_VER_MINOR, \
VXGE_DEAD_FW_VER_BUILD)
#define VXGE_EPROM_FW_VER_MAJOR 1
#define VXGE_EPROM_FW_VER_MINOR 6
#define VXGE_EPROM_FW_VER_BUILD 1
#define VXGE_EPROM_FW_VER VXGE_FW_VER(VXGE_EPROM_FW_VER_MAJOR, \
VXGE_EPROM_FW_VER_MINOR, \
VXGE_EPROM_FW_VER_BUILD)
#define VXGE_CERT_FW_VER_MAJOR 1
#define VXGE_CERT_FW_VER_MINOR 8
#define VXGE_CERT_FW_VER_BUILD 1
#define VXGE_CERT_FW_VER VXGE_FW_VER(VXGE_CERT_FW_VER_MAJOR, \
VXGE_CERT_FW_VER_MINOR, \
VXGE_CERT_FW_VER_BUILD)
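/*
 * For illustration (not part of the original header): VXGE_FW_VER packs a
 * version as (major << 16) + (minor << 8) + build, so for example
 * VXGE_CERT_FW_VER evaluates to (1 << 16) + (8 << 8) + 1 = 0x10801.
 */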
#endif