Commit 41627cdb authored by David S. Miller

Merge tag 'linux-can-next-for-4.19-20180727' of...

Merge tag 'linux-can-next-for-4.19-20180727' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next

Marc Kleine-Budde says:

====================
pull-request: can-next 2018-07-27

this is a pull request for net-next/master consisting of 38 patches.

Dan Murphy's patch fixes the path to a file in the comment of the CAN
Error Message Frame Mask structure.

A patch by Colin Ian King fixes a typo in the cc770 driver.

The next patch is by me and sorts the Kconfig and Makefile entries of the
CAN-USB driver subdir alphabetically.

The patch by Jakob Unterwurzacher adds support for the UCAN USB-CAN
adapter.

YueHaibing's patch replaces an open coded skb_put()+memset() by
skb_put_zero() in the CAN-dev infrastructure.

Zhu Yi provides a patch to enable multi-queue CAN devices.

Three patches by Luc Van Oostenryck fix the return type of several
drivers' xmit functions; I contribute a patch for a fourth driver.

Fabio Estevam's patch switches the flexcan driver to an SPDX identifier.

Two patches by Jia-Ju Bai replace mdelay() with usleep_range() in the
sja1000 drivers.

The next 6 patches are by Anssi Hannula and refactor the xilinx CAN
driver and add support for the xilinx CAN FD core.

A patch by Gustavo A. R. Silva adds a fall-through annotation to the
peak_usb driver.

5 patches by Stephane Grosjean for the peak CANFD driver do some
cleanups and provide more improvements for further firmware releases.

The remaining 13 patches are by Jimmy Assarsson; the first ones clean up
the kvaser_usb driver, so that the later patches can add support for the
Kvaser USB hydra family.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 4be90c79 1f6ed42c
......@@ -2,20 +2,26 @@ Xilinx Axi CAN/Zynq CANPS controller Device Tree Bindings
---------------------------------------------------------
Required properties:
- compatible : Should be "xlnx,zynq-can-1.0" for Zynq CAN
controllers and "xlnx,axi-can-1.00.a" for Axi CAN
controllers.
- reg : Physical base address and size of the Axi CAN/Zynq
CANPS registers map.
- compatible : Should be:
- "xlnx,zynq-can-1.0" for Zynq CAN controllers
- "xlnx,axi-can-1.00.a" for Axi CAN controllers
- "xlnx,canfd-1.0" for CAN FD controllers
- reg : Physical base address and size of the controller
registers map.
- interrupts : Property with a value describing the interrupt
number.
- interrupt-parent : Must be core interrupt controller
- clock-names : List of input clock names - "can_clk", "pclk"
(For CANPS), "can_clk" , "s_axi_aclk"(For AXI CAN)
- clock-names : List of input clock names
- "can_clk", "pclk" (For CANPS),
- "can_clk", "s_axi_aclk" (For AXI CAN and CAN FD).
(See clock bindings for details).
- clocks : Clock phandles (see clock bindings for details).
- tx-fifo-depth : Can Tx fifo depth.
- rx-fifo-depth : Can Rx fifo depth.
- tx-fifo-depth : Can Tx fifo depth (Zynq, Axi CAN).
- rx-fifo-depth : Can Rx fifo depth (Zynq, Axi CAN, CAN FD in
sequential Rx mode).
- tx-mailbox-count : Can Tx mailbox buffer count (CAN FD).
- rx-mailbox-count : Can Rx mailbox buffer count (CAN FD in mailbox Rx
mode).
Example:
......@@ -42,3 +48,14 @@ For Axi CAN Dts file:
tx-fifo-depth = <0x40>;
rx-fifo-depth = <0x40>;
};
For CAN FD Dts file:
canfd_0: canfd@40000000 {
compatible = "xlnx,canfd-1.0";
clocks = <&clkc 0>, <&clkc 1>;
clock-names = "can_clk", "s_axi_aclk";
reg = <0x40000000 0x2000>;
interrupt-parent = <&intc>;
interrupts = <0 59 1>;
tx-mailbox-count = <0x20>;
rx-fifo-depth = <0x20>;
};
=================
The UCAN Protocol
=================
UCAN is the protocol used by the microcontroller-based USB-CAN
adapter that is integrated on System-on-Modules from Theobroma Systems
and that is also available as a standalone USB stick.
The UCAN protocol has been designed to be hardware-independent.
It is modeled closely after how Linux represents CAN devices
internally. All multi-byte integers are encoded as Little Endian.
All structures mentioned in this document are defined in
``drivers/net/can/usb/ucan.c``.
USB Endpoints
=============
UCAN devices use three USB endpoints:
CONTROL endpoint
The driver sends device management commands on this endpoint
IN endpoint
The device sends CAN data frames and CAN error frames
OUT endpoint
The driver sends CAN data frames on the out endpoint
CONTROL Messages
================
UCAN devices are configured using vendor requests on the control pipe.
To support multiple CAN interfaces in a single USB device all
configuration commands target the corresponding interface in the USB
descriptor.
The driver uses ``ucan_ctrl_command_in/out`` and
``ucan_device_request_in`` to deliver commands to the device.
Setup Packet
------------
================= =====================================================
``bmRequestType`` Direction | Vendor | (Interface or Device)
``bRequest`` Command Number
``wValue`` Subcommand Number (16 Bit) or 0 if not used
``wIndex`` USB Interface Index (0 for device commands)
``wLength`` * Host to Device - Number of bytes to transmit
* Device to Host - Maximum Number of bytes to
receive. If the device sends less, common ZLP
semantics are used.
================= =====================================================
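The sketch below is illustrative only, not the driver's actual helper (the
real wrappers ``ucan_ctrl_command_in/out`` live in
``drivers/net/can/usb/ucan.c``). It shows how a Host2Dev interface command
matching the setup packet above could be issued with the standard
``usb_control_msg()`` API; the request number, timeout and payload are
placeholders.

.. code:: c

    /* Illustrative sketch of a UCAN-style vendor control transfer.
     * The command number ("request"), subcommand and payload layout are
     * placeholders; the real definitions are in drivers/net/can/usb/ucan.c.
     */
    static int ucan_example_ctrl_out(struct usb_device *udev, u8 intf_index,
                                     u8 request, u16 subcmd, void *buf, u16 len)
    {
            return usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
                                   request,                /* bRequest */
                                   USB_DIR_OUT | USB_TYPE_VENDOR |
                                   USB_RECIP_INTERFACE,    /* bmRequestType */
                                   subcmd,                 /* wValue */
                                   intf_index,             /* wIndex */
                                   buf, len,               /* data, wLength */
                                   1000 /* arbitrary ms timeout */);
    }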
Error Handling
--------------
The device indicates failed control commands by stalling the
pipe.
Device Commands
---------------
UCAN_DEVICE_GET_FW_STRING
~~~~~~~~~~~~~~~~~~~~~~~~~
*Dev2Host; optional*
Request the device firmware string.
Interface Commands
------------------
UCAN_COMMAND_START
~~~~~~~~~~~~~~~~~~
*Host2Dev; mandatory*
Bring the CAN interface up.
Payload Format
``ucan_ctl_payload_t.cmd_start``
==== ============================
mode or mask of ``UCAN_MODE_*``
==== ============================
UCAN_COMMAND_STOP
~~~~~~~~~~~~~~~~~~
*Host2Dev; mandatory*
Stop the CAN interface
Payload Format
*empty*
UCAN_COMMAND_RESET
~~~~~~~~~~~~~~~~~~
*Host2Dev; mandatory*
Reset the CAN controller (including error counters)
Payload Format
*empty*
UCAN_COMMAND_GET
~~~~~~~~~~~~~~~~
*Host2Dev; mandatory*
Get Information from the Device
Subcommands
^^^^^^^^^^^
UCAN_COMMAND_GET_INFO
Request the device information structure ``ucan_ctl_payload_t.device_info``.
See the ``device_info`` field for details, and
``uapi/linux/can/netlink.h`` for an explanation of the
``can_bittiming`` fields.
Payload Format
``ucan_ctl_payload_t.device_info``
UCAN_COMMAND_GET_PROTOCOL_VERSION
Request the device protocol version
``ucan_ctl_payload_t.protocol_version``. The current protocol version is 3.
Payload Format
``ucan_ctl_payload_t.protocol_version``
.. note:: Devices that do not implement this command use the old
protocol version 1
UCAN_COMMAND_SET_BITTIMING
~~~~~~~~~~~~~~~~~~~~~~~~~~
*Host2Dev; mandatory*
Setup bittiming by sending the structure
``ucan_ctl_payload_t.cmd_set_bittiming`` (see ``struct bittiming`` for
details).
Payload Format
``ucan_ctl_payload_t.cmd_set_bittiming``.
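For reference, the bittiming values the host computes and hands to the
driver come from ``struct can_bittiming`` as defined in
``uapi/linux/can/netlink.h``; the device-side ``cmd_set_bittiming`` layout
itself is defined in ``drivers/net/can/usb/ucan.c``.

.. code:: c

    /* From include/uapi/linux/can/netlink.h: the bit-timing parameters
     * the CAN core computes and passes to a driver's do_set_bittiming().
     */
    struct can_bittiming {
            __u32 bitrate;      /* Bit-rate in bits/second */
            __u32 sample_point; /* Sample point in one-tenth of a percent */
            __u32 tq;           /* Time quanta (TQ) in nanoseconds */
            __u32 prop_seg;     /* Propagation segment in TQs */
            __u32 phase_seg1;   /* Phase buffer segment 1 in TQs */
            __u32 phase_seg2;   /* Phase buffer segment 2 in TQs */
            __u32 sjw;          /* Synchronisation jump width in TQs */
            __u32 brp;          /* Bit-rate prescaler */
    };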
UCAN_SLEEP/WAKE
~~~~~~~~~~~~~~~
*Host2Dev; optional*
Configure sleep and wake modes. Not yet supported by the driver.
UCAN_FILTER
~~~~~~~~~~~
*Host2Dev; optional*
Setup hardware CAN filters. Not yet supported by the driver.
Allowed interface commands
--------------------------
================== =================== ==================
Legal Device State Command New Device State
================== =================== ==================
stopped SET_BITTIMING stopped
stopped START started
started STOP or RESET stopped
stopped STOP or RESET stopped
started RESTART started
any GET *no change*
================== =================== ==================
IN Message Format
=================
A data packet on the USB IN endpoint contains one or more
``ucan_message_in`` values. If multiple messages are batched in a USB
data packet, the ``len`` field can be used to jump to the next
``ucan_message_in`` value (take care to sanity-check the ``len`` value
against the actual data size).
.. _can_ucan_in_message_len:
``len`` field
-------------
Each ``ucan_message_in`` must be aligned to a 4-byte boundary (relative
to the start of the data buffer). That means that there may be padding
bytes between multiple ``ucan_message_in`` values:
.. code::
+----------------------------+ < 0
| |
| struct ucan_message_in |
| |
+----------------------------+ < len
[padding]
+----------------------------+ < round_up(len, 4)
| |
| struct ucan_message_in |
| |
+----------------------------+
[...]
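The receive path can then walk a USB data packet as sketched below. This is
a sketch under stated assumptions: ``UCAN_EXAMPLE_IN_HDR_SIZE`` and the
``len`` field name are placeholders, and the real ``struct ucan_message_in``
is defined in ``drivers/net/can/usb/ucan.c``; only the ``len``/alignment
handling follows the rule above.

.. code:: c

    /* Sketch only: count the batched ucan_message_in values in one USB
     * data packet.  UCAN_EXAMPLE_IN_HDR_SIZE (the fixed header size) and
     * the field names are placeholders.
     */
    static unsigned int example_count_in_messages(const u8 *buf,
                                                  unsigned int urb_len)
    {
            unsigned int pos = 0, count = 0;

            while (pos + UCAN_EXAMPLE_IN_HDR_SIZE <= urb_len) {
                    const struct ucan_message_in *m =
                            (const struct ucan_message_in *)(buf + pos);
                    u16 len = le16_to_cpu(m->len); /* integers are LE */

                    /* sanity-check len against the actual data size */
                    if (len < UCAN_EXAMPLE_IN_HDR_SIZE || pos + len > urb_len)
                            break;

                    count++;        /* a real driver dispatches on m->type */

                    /* the next message starts at the next 4-byte boundary */
                    pos += round_up(len, 4);
            }

            return count;
    }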
``type`` field
--------------
The ``type`` field specifies the type of the message.
UCAN_IN_RX
~~~~~~~~~~
``subtype``
zero
Data received from the CAN bus (ID + payload).
UCAN_IN_TX_COMPLETE
~~~~~~~~~~~~~~~~~~~
``subtype``
zero
The CAN device has sent a message to the CAN bus. It answers with a
list of tuples <echo-id, flags>.
The echo-id identifies the frame (it echoes the id from a previous
UCAN_OUT_TX message). The flag indicates the result of the
transmission: a set bit 0 indicates success, all other bits
are reserved and set to zero.
Flow Control
------------
When receiving CAN messages there is no flow control on the USB
buffer. The driver has to handle inbound messages quickly enough to
avoid drops. In case the device buffer overflows, the condition is
reported by sending corresponding error frames (see
:ref:`can_ucan_error_handling`).
OUT Message Format
==================
A data packet on the USB OUT endpoint contains one or more ``struct
ucan_message_out`` values. If multiple messages are batched into one
data packet, the device uses the ``len`` field to jump to the next
``ucan_message_out`` value. Each ``ucan_message_out`` must be aligned to 4
bytes (relative to the start of the data buffer). The mechanism is the
same as described in :ref:`can_ucan_in_message_len`.
.. code::
+----------------------------+ < 0
| |
| struct ucan_message_out |
| |
+----------------------------+ < len
[padding]
+----------------------------+ < round_up(len, 4)
| |
| struct ucan_message_out |
| |
+----------------------------+
[...]
``type`` field
--------------
In protocol version 3 only ``UCAN_OUT_TX`` is defined; others are used
only by legacy devices (protocol version 1).
UCAN_OUT_TX
~~~~~~~~~~~
``subtype``
echo id to be replied within a UCAN_IN_TX_COMPLETE message
Transmit a CAN frame. (parameters: ``id``, ``data``)
Flow Control
------------
When the device outbound buffers are full it starts sending *NAKs* on
the *OUT* pipe until more buffers are available. The driver stops the
queue when a certain threshold of out packets are incomplete.
.. _can_ucan_error_handling:
CAN Error Handling
==================
If error reporting is turned on, the device encodes errors into CAN
error frames (see ``uapi/linux/can/error.h``) and sends them using the
IN endpoint. The driver updates its error statistics and forwards
them.
Although UCAN devices can suppress error frames completely, in Linux
the driver is always interested. Hence, the device is always started with
``UCAN_MODE_BERR_REPORT`` set. Filtering those messages for user space
is done by the driver.
Bus OFF
-------
- The device does not recover from bus off automatically.
- Bus OFF is indicated by an error frame (see ``uapi/linux/can/error.h``)
- Bus OFF recovery is started by ``UCAN_COMMAND_RESTART``
- Once Bus OFF recovery is completed the device sends an error frame
indicating that it is in ERROR-ACTIVE state.
- During Bus OFF no frames are sent by the device.
- During Bus OFF transmission requests from the host are completed
immediately with the success bit left unset.
Example Conversation
====================
#) Device is connected to USB
#) Host sends command ``UCAN_COMMAND_RESET``, subcmd 0
#) Host sends command ``UCAN_COMMAND_GET``, subcmd ``UCAN_COMMAND_GET_INFO``
#) Device sends ``UCAN_IN_DEVICE_INFO``
#) Host sends command ``UCAN_COMMAND_SET_BITTIMING``
#) Host sends command ``UCAN_COMMAND_START``, subcmd 0, mode ``UCAN_MODE_BERR_REPORT``
......@@ -10,6 +10,7 @@ Contents:
af_xdp
batman-adv
can
can_ucan_protocol
dpaa2/index
e100
e1000
......
......@@ -73,7 +73,7 @@ MODULE_PARM_DESC(msgobj15_eff, "Extended 29-bit frames for message object 15 "
static int i82527_compat;
module_param(i82527_compat, int, 0444);
MODULE_PARM_DESC(i82527_compat, "Strict Intel 82527 comptibility mode "
MODULE_PARM_DESC(i82527_compat, "Strict Intel 82527 compatibility mode "
"without using additional functions");
/*
......
......@@ -649,8 +649,7 @@ struct sk_buff *alloc_can_skb(struct net_device *dev, struct can_frame **cf)
can_skb_prv(skb)->ifindex = dev->ifindex;
can_skb_prv(skb)->skbcnt = 0;
*cf = skb_put(skb, sizeof(struct can_frame));
memset(*cf, 0, sizeof(struct can_frame));
*cf = skb_put_zero(skb, sizeof(struct can_frame));
return skb;
}
......@@ -678,8 +677,7 @@ struct sk_buff *alloc_canfd_skb(struct net_device *dev,
can_skb_prv(skb)->ifindex = dev->ifindex;
can_skb_prv(skb)->skbcnt = 0;
*cfd = skb_put(skb, sizeof(struct canfd_frame));
memset(*cfd, 0, sizeof(struct canfd_frame));
*cfd = skb_put_zero(skb, sizeof(struct canfd_frame));
return skb;
}
......@@ -703,7 +701,8 @@ EXPORT_SYMBOL_GPL(alloc_can_err_skb);
/*
* Allocate and setup space for the CAN network device
*/
struct net_device *alloc_candev(int sizeof_priv, unsigned int echo_skb_max)
struct net_device *alloc_candev_mqs(int sizeof_priv, unsigned int echo_skb_max,
unsigned int txqs, unsigned int rxqs)
{
struct net_device *dev;
struct can_priv *priv;
......@@ -715,7 +714,8 @@ struct net_device *alloc_candev(int sizeof_priv, unsigned int echo_skb_max)
else
size = sizeof_priv;
dev = alloc_netdev(size, "can%d", NET_NAME_UNKNOWN, can_setup);
dev = alloc_netdev_mqs(size, "can%d", NET_NAME_UNKNOWN, can_setup,
txqs, rxqs);
if (!dev)
return NULL;
......@@ -734,7 +734,7 @@ struct net_device *alloc_candev(int sizeof_priv, unsigned int echo_skb_max)
return dev;
}
EXPORT_SYMBOL_GPL(alloc_candev);
EXPORT_SYMBOL_GPL(alloc_candev_mqs);
/*
* Free space of the CAN network device
......
/*
* flexcan.c - FLEXCAN CAN controller driver
*
* Copyright (c) 2005-2006 Varma Electronics Oy
* Copyright (c) 2009 Sascha Hauer, Pengutronix
* Copyright (c) 2010-2017 Pengutronix, Marc Kleine-Budde <kernel@pengutronix.de>
* Copyright (c) 2014 David Jander, Protonic Holland
*
* Based on code originally by Andrey Volkov <avolkov@varma-el.com>
*
* LICENCE:
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation version 2.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
// SPDX-License-Identifier: GPL-2.0
//
// flexcan.c - FLEXCAN CAN controller driver
//
// Copyright (c) 2005-2006 Varma Electronics Oy
// Copyright (c) 2009 Sascha Hauer, Pengutronix
// Copyright (c) 2010-2017 Pengutronix, Marc Kleine-Budde <kernel@pengutronix.de>
// Copyright (c) 2014 David Jander, Protonic Holland
//
// Based on code originally by Andrey Volkov <avolkov@varma-el.com>
#include <linux/netdevice.h>
#include <linux/can.h>
......@@ -523,7 +512,7 @@ static int flexcan_get_berr_counter(const struct net_device *dev,
return err;
}
static int flexcan_start_xmit(struct sk_buff *skb, struct net_device *dev)
static netdev_tx_t flexcan_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
const struct flexcan_priv *priv = netdev_priv(dev);
struct can_frame *cf = (struct can_frame *)skb->data;
......
......@@ -1684,7 +1684,7 @@ static int ican3_stop(struct net_device *ndev)
return 0;
}
static int ican3_xmit(struct sk_buff *skb, struct net_device *ndev)
static netdev_tx_t ican3_xmit(struct sk_buff *skb, struct net_device *ndev)
{
struct ican3_dev *mod = netdev_priv(ndev);
struct can_frame *cf = (struct can_frame *)skb->data;
......
......@@ -486,7 +486,7 @@ int peak_canfd_handle_msgs_list(struct peak_canfd_priv *priv,
if (msg_size <= 0)
break;
msg_ptr += msg_size;
msg_ptr += ALIGN(msg_size, 4);
}
if (msg_size < 0)
......
......@@ -174,9 +174,6 @@ struct pciefd_page {
u32 size;
};
#define CANFD_IRQ_SET 0x00000001
#define CANFD_TX_PATH_SET 0x00000002
/* CAN-FD channel object */
struct pciefd_board;
struct pciefd_can {
......@@ -418,7 +415,7 @@ static int pciefd_pre_cmd(struct peak_canfd_priv *ucan)
break;
/* going into operational mode: setup IRQ handler */
err = request_irq(priv->board->pci_dev->irq,
err = request_irq(priv->ucan.ndev->irq,
pciefd_irq_handler,
IRQF_SHARED,
PCIEFD_DRV_NAME,
......@@ -491,15 +488,18 @@ static int pciefd_post_cmd(struct peak_canfd_priv *ucan)
/* controller now in reset mode: */
/* disable IRQ for this CAN */
pciefd_can_writereg(priv, CANFD_CTL_IEN_BIT,
PCIEFD_REG_CAN_RX_CTL_CLR);
/* stop and reset DMA addresses in Tx/Rx engines */
pciefd_can_clear_tx_dma(priv);
pciefd_can_clear_rx_dma(priv);
/* disable IRQ for this CAN */
pciefd_can_writereg(priv, CANFD_CTL_IEN_BIT,
PCIEFD_REG_CAN_RX_CTL_CLR);
/* wait for above commands to complete (read cycle) */
(void)pciefd_sys_readreg(priv->board, PCIEFD_REG_SYS_VER1);
free_irq(priv->board->pci_dev->irq, priv);
free_irq(priv->ucan.ndev->irq, priv);
ucan->can.state = CAN_STATE_STOPPED;
......@@ -638,7 +638,7 @@ static int pciefd_can_probe(struct pciefd_board *pciefd)
GFP_KERNEL);
if (!priv->tx_dma_vaddr) {
dev_err(&pciefd->pci_dev->dev,
"Tx dmaim_alloc_coherent(%u) failure\n",
"Tx dmam_alloc_coherent(%u) failure\n",
PCIEFD_TX_DMA_SIZE);
goto err_free_candev;
}
......@@ -691,7 +691,7 @@ static int pciefd_can_probe(struct pciefd_board *pciefd)
pciefd->can[pciefd->can_count] = priv;
dev_info(&pciefd->pci_dev->dev, "%s at reg_base=0x%p irq=%d\n",
ndev->name, priv->reg_base, pciefd->pci_dev->irq);
ndev->name, priv->reg_base, ndev->irq);
return 0;
......
......@@ -608,7 +608,7 @@ static int peak_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
writeb(0x00, cfg_base + PITA_GPIOICR);
/* Toggle reset */
writeb(0x05, cfg_base + PITA_MISC + 3);
mdelay(5);
usleep_range(5000, 6000);
/* Leave parport mux mode */
writeb(0x04, cfg_base + PITA_MISC + 3);
......
......@@ -530,7 +530,7 @@ static int pcan_add_channels(struct pcan_pccard *card)
pcan_write_reg(card, PCC_CCR, ccr);
/* wait 2ms before unresetting channels */
mdelay(2);
usleep_range(2000, 3000);
ccr &= ~PCC_CCR_RST_ALL;
pcan_write_reg(card, PCC_CCR, ccr);
......
......@@ -409,7 +409,7 @@ static int sun4ican_set_mode(struct net_device *dev, enum can_mode mode)
* xx xx xx xx ff ll 00 11 22 33 44 55 66 77
* [ can_id ] [flags] [len] [can data (up to 8 bytes]
*/
static int sun4ican_start_xmit(struct sk_buff *skb, struct net_device *dev)
static netdev_tx_t sun4ican_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct sun4ican_priv *priv = netdev_priv(dev);
struct can_frame *cf = (struct can_frame *)skb->data;
......
menu "CAN USB interfaces"
depends on USB
config CAN_8DEV_USB
tristate "8 devices USB2CAN interface"
---help---
This driver supports the USB2CAN interface
from 8 devices (http://www.8devices.com).
config CAN_EMS_USB
tristate "EMS CPC-USB/ARM7 CAN/USB interface"
---help---
......@@ -26,7 +32,7 @@ config CAN_KVASER_USB
tristate "Kvaser CAN/USB interface"
---help---
This driver adds support for Kvaser CAN/USB devices like Kvaser
Leaf Light and Kvaser USBcan II.
Leaf Light, Kvaser USBcan II and Kvaser Memorator Pro 5xHS.
The driver provides support for the following devices:
- Kvaser Leaf Light
......@@ -55,12 +61,30 @@ config CAN_KVASER_USB
- Kvaser Memorator HS/HS
- Kvaser Memorator HS/LS
- Scania VCI2 (if you have the Kvaser logo on top)
- Kvaser BlackBird v2
- Kvaser Leaf Pro HS v2
- Kvaser Hybrid 2xCAN/LIN
- Kvaser Hybrid Pro 2xCAN/LIN
- Kvaser Memorator 2xHS v2
- Kvaser Memorator Pro 2xHS v2
- Kvaser Memorator Pro 5xHS
- Kvaser USBcan Light 4xHS
- Kvaser USBcan Pro 2xHS v2
- Kvaser USBcan Pro 5xHS
- ATI Memorator Pro 2xHS v2
- ATI USBcan Pro 2xHS v2
If unsure, say N.
To compile this driver as a module, choose M here: the
module will be called kvaser_usb.
config CAN_MCBA_USB
tristate "Microchip CAN BUS Analyzer interface"
---help---
This driver supports the CAN BUS Analyzer interface
from Microchip (http://www.microchip.com/development-tools/).
config CAN_PEAK_USB
tristate "PEAK PCAN-USB/USB Pro interfaces for CAN 2.0b/CAN-FD"
---help---
......@@ -77,16 +101,26 @@ config CAN_PEAK_USB
(see also http://www.peak-system.com).
config CAN_8DEV_USB
tristate "8 devices USB2CAN interface"
---help---
This driver supports the USB2CAN interface
from 8 devices (http://www.8devices.com).
config CAN_MCBA_USB
tristate "Microchip CAN BUS Analyzer interface"
---help---
This driver supports the CAN BUS Analyzer interface
from Microchip (http://www.microchip.com/development-tools/).
config CAN_UCAN
tristate "Theobroma Systems UCAN interface"
---help---
This driver supports the Theobroma Systems
UCAN USB-CAN interface.
The UCAN driver supports the microcontroller-based USB/CAN
adapters from Theobroma Systems. There are two form-factors
that run essentially the same firmware:
* Seal: standalone USB stick
(https://www.theobroma-systems.com/seal)
* Mule: integrated on the PCB of various System-on-Modules
from Theobroma Systems like the A31-µQ7 and the RK3399-Q7
(https://www.theobroma-systems.com/rk3399-q7)
endmenu
......@@ -3,10 +3,11 @@
# Makefile for the Linux Controller Area Network USB drivers.
#
obj-$(CONFIG_CAN_8DEV_USB) += usb_8dev.o
obj-$(CONFIG_CAN_EMS_USB) += ems_usb.o
obj-$(CONFIG_CAN_ESD_USB2) += esd_usb2.o
obj-$(CONFIG_CAN_GS_USB) += gs_usb.o
obj-$(CONFIG_CAN_KVASER_USB) += kvaser_usb.o
obj-$(CONFIG_CAN_PEAK_USB) += peak_usb/
obj-$(CONFIG_CAN_8DEV_USB) += usb_8dev.o
obj-$(CONFIG_CAN_KVASER_USB) += kvaser_usb/
obj-$(CONFIG_CAN_MCBA_USB) += mcba_usb.o
obj-$(CONFIG_CAN_PEAK_USB) += peak_usb/
obj-$(CONFIG_CAN_UCAN) += ucan.o
obj-$(CONFIG_CAN_KVASER_USB) += kvaser_usb.o
kvaser_usb-y = kvaser_usb_core.o kvaser_usb_leaf.o kvaser_usb_hydra.o
/* SPDX-License-Identifier: GPL-2.0 */
/* Parts of this driver are based on the following:
* - Kvaser linux leaf driver (version 4.78)
* - CAN driver for esd CAN-USB/2
* - Kvaser linux usbcanII driver (version 5.3)
* - Kvaser linux mhydra driver (version 5.24)
*
* Copyright (C) 2002-2018 KVASER AB, Sweden. All rights reserved.
* Copyright (C) 2010 Matthias Fuchs <matthias.fuchs@esd.eu>, esd gmbh
* Copyright (C) 2012 Olivier Sobrie <olivier@sobrie.be>
* Copyright (C) 2015 Valeo S.A.
*/
#ifndef KVASER_USB_H
#define KVASER_USB_H
/* Kvaser USB CAN dongles are divided into three major platforms:
* - Hydra: Running firmware labeled as 'mhydra'
* - Leaf: Based on Renesas M32C or Freescale i.MX28, running firmware labeled
* as 'filo'
* - UsbcanII: Based on Renesas M16C, running firmware labeled as 'helios'
*/
#include <linux/completion.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/usb.h>
#include <linux/can.h>
#include <linux/can/dev.h>
#define KVASER_USB_MAX_RX_URBS 4
#define KVASER_USB_MAX_TX_URBS 128
#define KVASER_USB_TIMEOUT 1000 /* msecs */
#define KVASER_USB_RX_BUFFER_SIZE 3072
#define KVASER_USB_MAX_NET_DEVICES 5
/* USB devices features */
#define KVASER_USB_HAS_SILENT_MODE BIT(0)
#define KVASER_USB_HAS_TXRX_ERRORS BIT(1)
/* Device capabilities */
#define KVASER_USB_CAP_BERR_CAP 0x01
#define KVASER_USB_CAP_EXT_CAP 0x02
#define KVASER_USB_HYDRA_CAP_EXT_CMD 0x04
struct kvaser_usb_dev_cfg;
enum kvaser_usb_leaf_family {
KVASER_LEAF,
KVASER_USBCAN,
};
#define KVASER_USB_HYDRA_MAX_CMD_LEN 128
struct kvaser_usb_dev_card_data_hydra {
u8 channel_to_he[KVASER_USB_MAX_NET_DEVICES];
u8 sysdbg_he;
spinlock_t transid_lock; /* lock for transid */
u16 transid;
/* lock for usb_rx_leftover and usb_rx_leftover_len */
spinlock_t usb_rx_leftover_lock;
u8 usb_rx_leftover[KVASER_USB_HYDRA_MAX_CMD_LEN];
u8 usb_rx_leftover_len;
};
struct kvaser_usb_dev_card_data {
u32 ctrlmode_supported;
u32 capabilities;
union {
struct {
enum kvaser_usb_leaf_family family;
} leaf;
struct kvaser_usb_dev_card_data_hydra hydra;
};
};
/* Context for an outstanding, not yet ACKed, transmission */
struct kvaser_usb_tx_urb_context {
struct kvaser_usb_net_priv *priv;
u32 echo_index;
int dlc;
};
struct kvaser_usb {
struct usb_device *udev;
struct usb_interface *intf;
struct kvaser_usb_net_priv *nets[KVASER_USB_MAX_NET_DEVICES];
const struct kvaser_usb_dev_ops *ops;
const struct kvaser_usb_dev_cfg *cfg;
struct usb_endpoint_descriptor *bulk_in, *bulk_out;
struct usb_anchor rx_submitted;
/* @max_tx_urbs: Firmware-reported maximum number of outstanding,
* not yet ACKed, transmissions on this device. This value is
* also used as a sentinel for marking free tx contexts.
*/
u32 fw_version;
unsigned int nchannels;
unsigned int max_tx_urbs;
struct kvaser_usb_dev_card_data card_data;
bool rxinitdone;
void *rxbuf[KVASER_USB_MAX_RX_URBS];
dma_addr_t rxbuf_dma[KVASER_USB_MAX_RX_URBS];
};
struct kvaser_usb_net_priv {
struct can_priv can;
struct can_berr_counter bec;
struct kvaser_usb *dev;
struct net_device *netdev;
int channel;
struct completion start_comp, stop_comp, flush_comp;
struct usb_anchor tx_submitted;
spinlock_t tx_contexts_lock; /* lock for active_tx_contexts */
int active_tx_contexts;
struct kvaser_usb_tx_urb_context tx_contexts[];
};
/**
* struct kvaser_usb_dev_ops - Device specific functions
* @dev_set_mode: used for can.do_set_mode
* @dev_set_bittiming: used for can.do_set_bittiming
* @dev_set_data_bittiming: used for can.do_set_data_bittiming
* @dev_get_berr_counter: used for can.do_get_berr_counter
*
* @dev_setup_endpoints: setup USB in and out endpoints
* @dev_init_card: initialize card
* @dev_get_software_info: get software info
* @dev_get_software_details: get software details
* @dev_get_card_info: get card info
* @dev_get_capabilities: discover device capabilities
*
* @dev_set_opt_mode: set ctrlmode
* @dev_start_chip: start the CAN controller
* @dev_stop_chip: stop the CAN controller
* @dev_reset_chip: reset the CAN controller
* @dev_flush_queue: flush outstanding CAN messages
* @dev_read_bulk_callback: handle incoming commands
* @dev_frame_to_cmd: translate struct can_frame into device command
*/
struct kvaser_usb_dev_ops {
int (*dev_set_mode)(struct net_device *netdev, enum can_mode mode);
int (*dev_set_bittiming)(struct net_device *netdev);
int (*dev_set_data_bittiming)(struct net_device *netdev);
int (*dev_get_berr_counter)(const struct net_device *netdev,
struct can_berr_counter *bec);
int (*dev_setup_endpoints)(struct kvaser_usb *dev);
int (*dev_init_card)(struct kvaser_usb *dev);
int (*dev_get_software_info)(struct kvaser_usb *dev);
int (*dev_get_software_details)(struct kvaser_usb *dev);
int (*dev_get_card_info)(struct kvaser_usb *dev);
int (*dev_get_capabilities)(struct kvaser_usb *dev);
int (*dev_set_opt_mode)(const struct kvaser_usb_net_priv *priv);
int (*dev_start_chip)(struct kvaser_usb_net_priv *priv);
int (*dev_stop_chip)(struct kvaser_usb_net_priv *priv);
int (*dev_reset_chip)(struct kvaser_usb *dev, int channel);
int (*dev_flush_queue)(struct kvaser_usb_net_priv *priv);
void (*dev_read_bulk_callback)(struct kvaser_usb *dev, void *buf,
int len);
void *(*dev_frame_to_cmd)(const struct kvaser_usb_net_priv *priv,
const struct sk_buff *skb, int *frame_len,
int *cmd_len, u16 transid);
};
struct kvaser_usb_dev_cfg {
const struct can_clock clock;
const unsigned int timestamp_freq;
const struct can_bittiming_const * const bittiming_const;
const struct can_bittiming_const * const data_bittiming_const;
};
extern const struct kvaser_usb_dev_ops kvaser_usb_hydra_dev_ops;
extern const struct kvaser_usb_dev_ops kvaser_usb_leaf_dev_ops;
int kvaser_usb_recv_cmd(const struct kvaser_usb *dev, void *cmd, int len,
int *actual_len);
int kvaser_usb_send_cmd(const struct kvaser_usb *dev, void *cmd, int len);
int kvaser_usb_send_cmd_async(struct kvaser_usb_net_priv *priv, void *cmd,
int len);
int kvaser_usb_can_rx_over_error(struct net_device *netdev);
#endif /* KVASER_USB_H */
// SPDX-License-Identifier: GPL-2.0
/* Parts of this driver are based on the following:
* - Kvaser linux leaf driver (version 4.78)
* - CAN driver for esd CAN-USB/2
* - Kvaser linux usbcanII driver (version 5.3)
* - Kvaser linux mhydra driver (version 5.24)
*
* Copyright (C) 2002-2018 KVASER AB, Sweden. All rights reserved.
* Copyright (C) 2010 Matthias Fuchs <matthias.fuchs@esd.eu>, esd gmbh
* Copyright (C) 2012 Olivier Sobrie <olivier@sobrie.be>
* Copyright (C) 2015 Valeo S.A.
*/
#include <linux/completion.h>
#include <linux/device.h>
#include <linux/gfp.h>
#include <linux/if.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/usb.h>
#include <linux/can.h>
#include <linux/can/dev.h>
#include <linux/can/error.h>
#include <linux/can/netlink.h>
#include "kvaser_usb.h"
/* Kvaser USB vendor id. */
#define KVASER_VENDOR_ID 0x0bfd
/* Kvaser Leaf USB devices product ids */
#define USB_LEAF_DEVEL_PRODUCT_ID 10
#define USB_LEAF_LITE_PRODUCT_ID 11
#define USB_LEAF_PRO_PRODUCT_ID 12
#define USB_LEAF_SPRO_PRODUCT_ID 14
#define USB_LEAF_PRO_LS_PRODUCT_ID 15
#define USB_LEAF_PRO_SWC_PRODUCT_ID 16
#define USB_LEAF_PRO_LIN_PRODUCT_ID 17
#define USB_LEAF_SPRO_LS_PRODUCT_ID 18
#define USB_LEAF_SPRO_SWC_PRODUCT_ID 19
#define USB_MEMO2_DEVEL_PRODUCT_ID 22
#define USB_MEMO2_HSHS_PRODUCT_ID 23
#define USB_UPRO_HSHS_PRODUCT_ID 24
#define USB_LEAF_LITE_GI_PRODUCT_ID 25
#define USB_LEAF_PRO_OBDII_PRODUCT_ID 26
#define USB_MEMO2_HSLS_PRODUCT_ID 27
#define USB_LEAF_LITE_CH_PRODUCT_ID 28
#define USB_BLACKBIRD_SPRO_PRODUCT_ID 29
#define USB_OEM_MERCURY_PRODUCT_ID 34
#define USB_OEM_LEAF_PRODUCT_ID 35
#define USB_CAN_R_PRODUCT_ID 39
#define USB_LEAF_LITE_V2_PRODUCT_ID 288
#define USB_MINI_PCIE_HS_PRODUCT_ID 289
#define USB_LEAF_LIGHT_HS_V2_OEM_PRODUCT_ID 290
#define USB_USBCAN_LIGHT_2HS_PRODUCT_ID 291
#define USB_MINI_PCIE_2HS_PRODUCT_ID 292
/* Kvaser USBCan-II devices product ids */
#define USB_USBCAN_REVB_PRODUCT_ID 2
#define USB_VCI2_PRODUCT_ID 3
#define USB_USBCAN2_PRODUCT_ID 4
#define USB_MEMORATOR_PRODUCT_ID 5
/* Kvaser Minihydra USB devices product ids */
#define USB_BLACKBIRD_V2_PRODUCT_ID 258
#define USB_MEMO_PRO_5HS_PRODUCT_ID 260
#define USB_USBCAN_PRO_5HS_PRODUCT_ID 261
#define USB_USBCAN_LIGHT_4HS_PRODUCT_ID 262
#define USB_LEAF_PRO_HS_V2_PRODUCT_ID 263
#define USB_USBCAN_PRO_2HS_V2_PRODUCT_ID 264
#define USB_MEMO_2HS_PRODUCT_ID 265
#define USB_MEMO_PRO_2HS_V2_PRODUCT_ID 266
#define USB_HYBRID_CANLIN_PRODUCT_ID 267
#define USB_ATI_USBCAN_PRO_2HS_V2_PRODUCT_ID 268
#define USB_ATI_MEMO_PRO_2HS_V2_PRODUCT_ID 269
#define USB_HYBRID_PRO_CANLIN_PRODUCT_ID 270
static inline bool kvaser_is_leaf(const struct usb_device_id *id)
{
return (id->idProduct >= USB_LEAF_DEVEL_PRODUCT_ID &&
id->idProduct <= USB_CAN_R_PRODUCT_ID) ||
(id->idProduct >= USB_LEAF_LITE_V2_PRODUCT_ID &&
id->idProduct <= USB_MINI_PCIE_2HS_PRODUCT_ID);
}
static inline bool kvaser_is_usbcan(const struct usb_device_id *id)
{
return id->idProduct >= USB_USBCAN_REVB_PRODUCT_ID &&
id->idProduct <= USB_MEMORATOR_PRODUCT_ID;
}
static inline bool kvaser_is_hydra(const struct usb_device_id *id)
{
return id->idProduct >= USB_BLACKBIRD_V2_PRODUCT_ID &&
id->idProduct <= USB_HYBRID_PRO_CANLIN_PRODUCT_ID;
}
static const struct usb_device_id kvaser_usb_table[] = {
/* Leaf USB product IDs */
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_DEVEL_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS |
KVASER_USB_HAS_SILENT_MODE },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS |
KVASER_USB_HAS_SILENT_MODE },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_LS_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS |
KVASER_USB_HAS_SILENT_MODE },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_SWC_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS |
KVASER_USB_HAS_SILENT_MODE },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_LIN_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS |
KVASER_USB_HAS_SILENT_MODE },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_LS_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS |
KVASER_USB_HAS_SILENT_MODE },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_SWC_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS |
KVASER_USB_HAS_SILENT_MODE },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_DEVEL_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS |
KVASER_USB_HAS_SILENT_MODE },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_HSHS_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS |
KVASER_USB_HAS_SILENT_MODE },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_UPRO_HSHS_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_GI_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_OBDII_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS |
KVASER_USB_HAS_SILENT_MODE },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_HSLS_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_CH_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_BLACKBIRD_SPRO_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_OEM_MERCURY_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_OEM_LEAF_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_CAN_R_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_V2_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_HS_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_HS_V2_OEM_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_2HS_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_2HS_PRODUCT_ID) },
/* USBCANII USB product IDs */
{ USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN2_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_REVB_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMORATOR_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_VCI2_PRODUCT_ID),
.driver_info = KVASER_USB_HAS_TXRX_ERRORS },
/* Minihydra USB product IDs */
{ USB_DEVICE(KVASER_VENDOR_ID, USB_BLACKBIRD_V2_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_5HS_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_5HS_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_4HS_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_HS_V2_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_2HS_V2_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_2HS_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_2HS_V2_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_CANLIN_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_USBCAN_PRO_2HS_V2_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_MEMO_PRO_2HS_V2_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_PRO_CANLIN_PRODUCT_ID) },
{ }
};
MODULE_DEVICE_TABLE(usb, kvaser_usb_table);
int kvaser_usb_send_cmd(const struct kvaser_usb *dev, void *cmd, int len)
{
int actual_len; /* Not used */
return usb_bulk_msg(dev->udev,
usb_sndbulkpipe(dev->udev,
dev->bulk_out->bEndpointAddress),
cmd, len, &actual_len, KVASER_USB_TIMEOUT);
}
int kvaser_usb_recv_cmd(const struct kvaser_usb *dev, void *cmd, int len,
int *actual_len)
{
return usb_bulk_msg(dev->udev,
usb_rcvbulkpipe(dev->udev,
dev->bulk_in->bEndpointAddress),
cmd, len, actual_len, KVASER_USB_TIMEOUT);
}
static void kvaser_usb_send_cmd_callback(struct urb *urb)
{
struct net_device *netdev = urb->context;
kfree(urb->transfer_buffer);
if (urb->status)
netdev_warn(netdev, "urb status received: %d\n", urb->status);
}
int kvaser_usb_send_cmd_async(struct kvaser_usb_net_priv *priv, void *cmd,
int len)
{
struct kvaser_usb *dev = priv->dev;
struct net_device *netdev = priv->netdev;
struct urb *urb;
int err;
urb = usb_alloc_urb(0, GFP_ATOMIC);
if (!urb)
return -ENOMEM;
usb_fill_bulk_urb(urb, dev->udev,
usb_sndbulkpipe(dev->udev,
dev->bulk_out->bEndpointAddress),
cmd, len, kvaser_usb_send_cmd_callback, netdev);
usb_anchor_urb(urb, &priv->tx_submitted);
err = usb_submit_urb(urb, GFP_ATOMIC);
if (err) {
netdev_err(netdev, "Error transmitting URB\n");
usb_unanchor_urb(urb);
}
usb_free_urb(urb);
return 0;
}
int kvaser_usb_can_rx_over_error(struct net_device *netdev)
{
struct net_device_stats *stats = &netdev->stats;
struct can_frame *cf;
struct sk_buff *skb;
stats->rx_over_errors++;
stats->rx_errors++;
skb = alloc_can_err_skb(netdev, &cf);
if (!skb) {
stats->rx_dropped++;
netdev_warn(netdev, "No memory left for err_skb\n");
return -ENOMEM;
}
cf->can_id |= CAN_ERR_CRTL;
cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
stats->rx_packets++;
stats->rx_bytes += cf->can_dlc;
netif_rx(skb);
return 0;
}
static void kvaser_usb_read_bulk_callback(struct urb *urb)
{
struct kvaser_usb *dev = urb->context;
int err;
unsigned int i;
switch (urb->status) {
case 0:
break;
case -ENOENT:
case -EPIPE:
case -EPROTO:
case -ESHUTDOWN:
return;
default:
dev_info(&dev->intf->dev, "Rx URB aborted (%d)\n", urb->status);
goto resubmit_urb;
}
dev->ops->dev_read_bulk_callback(dev, urb->transfer_buffer,
urb->actual_length);
resubmit_urb:
usb_fill_bulk_urb(urb, dev->udev,
usb_rcvbulkpipe(dev->udev,
dev->bulk_in->bEndpointAddress),
urb->transfer_buffer, KVASER_USB_RX_BUFFER_SIZE,
kvaser_usb_read_bulk_callback, dev);
err = usb_submit_urb(urb, GFP_ATOMIC);
if (err == -ENODEV) {
for (i = 0; i < dev->nchannels; i++) {
if (!dev->nets[i])
continue;
netif_device_detach(dev->nets[i]->netdev);
}
} else if (err) {
dev_err(&dev->intf->dev,
"Failed resubmitting read bulk urb: %d\n", err);
}
}
static int kvaser_usb_setup_rx_urbs(struct kvaser_usb *dev)
{
int i, err = 0;
if (dev->rxinitdone)
return 0;
for (i = 0; i < KVASER_USB_MAX_RX_URBS; i++) {
struct urb *urb = NULL;
u8 *buf = NULL;
dma_addr_t buf_dma;
urb = usb_alloc_urb(0, GFP_KERNEL);
if (!urb) {
err = -ENOMEM;
break;
}
buf = usb_alloc_coherent(dev->udev, KVASER_USB_RX_BUFFER_SIZE,
GFP_KERNEL, &buf_dma);
if (!buf) {
dev_warn(&dev->intf->dev,
"No memory left for USB buffer\n");
usb_free_urb(urb);
err = -ENOMEM;
break;
}
usb_fill_bulk_urb(urb, dev->udev,
usb_rcvbulkpipe
(dev->udev,
dev->bulk_in->bEndpointAddress),
buf, KVASER_USB_RX_BUFFER_SIZE,
kvaser_usb_read_bulk_callback, dev);
urb->transfer_dma = buf_dma;
urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
usb_anchor_urb(urb, &dev->rx_submitted);
err = usb_submit_urb(urb, GFP_KERNEL);
if (err) {
usb_unanchor_urb(urb);
usb_free_coherent(dev->udev,
KVASER_USB_RX_BUFFER_SIZE, buf,
buf_dma);
usb_free_urb(urb);
break;
}
dev->rxbuf[i] = buf;
dev->rxbuf_dma[i] = buf_dma;
usb_free_urb(urb);
}
if (i == 0) {
dev_warn(&dev->intf->dev, "Cannot setup read URBs, error %d\n",
err);
return err;
} else if (i < KVASER_USB_MAX_RX_URBS) {
dev_warn(&dev->intf->dev, "RX performances may be slow\n");
}
dev->rxinitdone = true;
return 0;
}
static int kvaser_usb_open(struct net_device *netdev)
{
struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
struct kvaser_usb *dev = priv->dev;
int err;
err = open_candev(netdev);
if (err)
return err;
err = kvaser_usb_setup_rx_urbs(dev);
if (err)
goto error;
err = dev->ops->dev_set_opt_mode(priv);
if (err)
goto error;
err = dev->ops->dev_start_chip(priv);
if (err) {
netdev_warn(netdev, "Cannot start device, error %d\n", err);
goto error;
}
priv->can.state = CAN_STATE_ERROR_ACTIVE;
return 0;
error:
close_candev(netdev);
return err;
}
static void kvaser_usb_reset_tx_urb_contexts(struct kvaser_usb_net_priv *priv)
{
int i, max_tx_urbs;
max_tx_urbs = priv->dev->max_tx_urbs;
priv->active_tx_contexts = 0;
for (i = 0; i < max_tx_urbs; i++)
priv->tx_contexts[i].echo_index = max_tx_urbs;
}
/* This method might sleep. Do not call it in the atomic context
* of URB completions.
*/
static void kvaser_usb_unlink_tx_urbs(struct kvaser_usb_net_priv *priv)
{
usb_kill_anchored_urbs(&priv->tx_submitted);
kvaser_usb_reset_tx_urb_contexts(priv);
}
static void kvaser_usb_unlink_all_urbs(struct kvaser_usb *dev)
{
int i;
usb_kill_anchored_urbs(&dev->rx_submitted);
for (i = 0; i < KVASER_USB_MAX_RX_URBS; i++)
usb_free_coherent(dev->udev, KVASER_USB_RX_BUFFER_SIZE,
dev->rxbuf[i], dev->rxbuf_dma[i]);
for (i = 0; i < dev->nchannels; i++) {
struct kvaser_usb_net_priv *priv = dev->nets[i];
if (priv)
kvaser_usb_unlink_tx_urbs(priv);
}
}
static int kvaser_usb_close(struct net_device *netdev)
{
struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
struct kvaser_usb *dev = priv->dev;
int err;
netif_stop_queue(netdev);
err = dev->ops->dev_flush_queue(priv);
if (err)
netdev_warn(netdev, "Cannot flush queue, error %d\n", err);
if (dev->ops->dev_reset_chip) {
err = dev->ops->dev_reset_chip(dev, priv->channel);
if (err)
netdev_warn(netdev, "Cannot reset card, error %d\n",
err);
}
err = dev->ops->dev_stop_chip(priv);
if (err)
netdev_warn(netdev, "Cannot stop device, error %d\n", err);
/* reset tx contexts */
kvaser_usb_unlink_tx_urbs(priv);
priv->can.state = CAN_STATE_STOPPED;
close_candev(priv->netdev);
return 0;
}
static void kvaser_usb_write_bulk_callback(struct urb *urb)
{
struct kvaser_usb_tx_urb_context *context = urb->context;
struct kvaser_usb_net_priv *priv;
struct net_device *netdev;
if (WARN_ON(!context))
return;
priv = context->priv;
netdev = priv->netdev;
kfree(urb->transfer_buffer);
if (!netif_device_present(netdev))
return;
if (urb->status)
netdev_info(netdev, "Tx URB aborted (%d)\n", urb->status);
}
static netdev_tx_t kvaser_usb_start_xmit(struct sk_buff *skb,
struct net_device *netdev)
{
struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
struct kvaser_usb *dev = priv->dev;
struct net_device_stats *stats = &netdev->stats;
struct kvaser_usb_tx_urb_context *context = NULL;
struct urb *urb;
void *buf;
int cmd_len = 0;
int err, ret = NETDEV_TX_OK;
unsigned int i;
unsigned long flags;
if (can_dropped_invalid_skb(netdev, skb))
return NETDEV_TX_OK;
urb = usb_alloc_urb(0, GFP_ATOMIC);
if (!urb) {
stats->tx_dropped++;
dev_kfree_skb(skb);
return NETDEV_TX_OK;
}
spin_lock_irqsave(&priv->tx_contexts_lock, flags);
for (i = 0; i < dev->max_tx_urbs; i++) {
if (priv->tx_contexts[i].echo_index == dev->max_tx_urbs) {
context = &priv->tx_contexts[i];
context->echo_index = i;
can_put_echo_skb(skb, netdev, context->echo_index);
++priv->active_tx_contexts;
if (priv->active_tx_contexts >= (int)dev->max_tx_urbs)
netif_stop_queue(netdev);
break;
}
}
spin_unlock_irqrestore(&priv->tx_contexts_lock, flags);
/* This should never happen; it implies a flow control bug */
if (!context) {
netdev_warn(netdev, "cannot find free context\n");
ret = NETDEV_TX_BUSY;
goto freeurb;
}
buf = dev->ops->dev_frame_to_cmd(priv, skb, &context->dlc, &cmd_len,
context->echo_index);
if (!buf) {
stats->tx_dropped++;
dev_kfree_skb(skb);
spin_lock_irqsave(&priv->tx_contexts_lock, flags);
can_free_echo_skb(netdev, context->echo_index);
context->echo_index = dev->max_tx_urbs;
--priv->active_tx_contexts;
netif_wake_queue(netdev);
spin_unlock_irqrestore(&priv->tx_contexts_lock, flags);
goto freeurb;
}
context->priv = priv;
usb_fill_bulk_urb(urb, dev->udev,
usb_sndbulkpipe(dev->udev,
dev->bulk_out->bEndpointAddress),
buf, cmd_len, kvaser_usb_write_bulk_callback,
context);
usb_anchor_urb(urb, &priv->tx_submitted);
err = usb_submit_urb(urb, GFP_ATOMIC);
if (unlikely(err)) {
spin_lock_irqsave(&priv->tx_contexts_lock, flags);
can_free_echo_skb(netdev, context->echo_index);
context->echo_index = dev->max_tx_urbs;
--priv->active_tx_contexts;
netif_wake_queue(netdev);
spin_unlock_irqrestore(&priv->tx_contexts_lock, flags);
usb_unanchor_urb(urb);
kfree(buf);
stats->tx_dropped++;
if (err == -ENODEV)
netif_device_detach(netdev);
else
netdev_warn(netdev, "Failed tx_urb %d\n", err);
goto freeurb;
}
ret = NETDEV_TX_OK;
freeurb:
usb_free_urb(urb);
return ret;
}
static const struct net_device_ops kvaser_usb_netdev_ops = {
.ndo_open = kvaser_usb_open,
.ndo_stop = kvaser_usb_close,
.ndo_start_xmit = kvaser_usb_start_xmit,
.ndo_change_mtu = can_change_mtu,
};
static void kvaser_usb_remove_interfaces(struct kvaser_usb *dev)
{
int i;
for (i = 0; i < dev->nchannels; i++) {
if (!dev->nets[i])
continue;
unregister_candev(dev->nets[i]->netdev);
}
kvaser_usb_unlink_all_urbs(dev);
for (i = 0; i < dev->nchannels; i++) {
if (!dev->nets[i])
continue;
free_candev(dev->nets[i]->netdev);
}
}
static int kvaser_usb_init_one(struct kvaser_usb *dev,
const struct usb_device_id *id, int channel)
{
struct net_device *netdev;
struct kvaser_usb_net_priv *priv;
int err;
if (dev->ops->dev_reset_chip) {
err = dev->ops->dev_reset_chip(dev, channel);
if (err)
return err;
}
netdev = alloc_candev(sizeof(*priv) +
dev->max_tx_urbs * sizeof(*priv->tx_contexts),
dev->max_tx_urbs);
if (!netdev) {
dev_err(&dev->intf->dev, "Cannot alloc candev\n");
return -ENOMEM;
}
priv = netdev_priv(netdev);
init_usb_anchor(&priv->tx_submitted);
init_completion(&priv->start_comp);
init_completion(&priv->stop_comp);
priv->can.ctrlmode_supported = 0;
priv->dev = dev;
priv->netdev = netdev;
priv->channel = channel;
spin_lock_init(&priv->tx_contexts_lock);
kvaser_usb_reset_tx_urb_contexts(priv);
priv->can.state = CAN_STATE_STOPPED;
priv->can.clock.freq = dev->cfg->clock.freq;
priv->can.bittiming_const = dev->cfg->bittiming_const;
priv->can.do_set_bittiming = dev->ops->dev_set_bittiming;
priv->can.do_set_mode = dev->ops->dev_set_mode;
if ((id->driver_info & KVASER_USB_HAS_TXRX_ERRORS) ||
(priv->dev->card_data.capabilities & KVASER_USB_CAP_BERR_CAP))
priv->can.do_get_berr_counter = dev->ops->dev_get_berr_counter;
if (id->driver_info & KVASER_USB_HAS_SILENT_MODE)
priv->can.ctrlmode_supported |= CAN_CTRLMODE_LISTENONLY;
priv->can.ctrlmode_supported |= dev->card_data.ctrlmode_supported;
if (priv->can.ctrlmode_supported & CAN_CTRLMODE_FD) {
priv->can.data_bittiming_const = dev->cfg->data_bittiming_const;
priv->can.do_set_data_bittiming =
dev->ops->dev_set_data_bittiming;
}
netdev->flags |= IFF_ECHO;
netdev->netdev_ops = &kvaser_usb_netdev_ops;
SET_NETDEV_DEV(netdev, &dev->intf->dev);
netdev->dev_id = channel;
dev->nets[channel] = priv;
err = register_candev(netdev);
if (err) {
dev_err(&dev->intf->dev, "Failed to register CAN device\n");
free_candev(netdev);
dev->nets[channel] = NULL;
return err;
}
netdev_dbg(netdev, "device registered\n");
return 0;
}
static int kvaser_usb_probe(struct usb_interface *intf,
const struct usb_device_id *id)
{
struct kvaser_usb *dev;
int err;
int i;
dev = devm_kzalloc(&intf->dev, sizeof(*dev), GFP_KERNEL);
if (!dev)
return -ENOMEM;
if (kvaser_is_leaf(id)) {
dev->card_data.leaf.family = KVASER_LEAF;
dev->ops = &kvaser_usb_leaf_dev_ops;
} else if (kvaser_is_usbcan(id)) {
dev->card_data.leaf.family = KVASER_USBCAN;
dev->ops = &kvaser_usb_leaf_dev_ops;
} else if (kvaser_is_hydra(id)) {
dev->ops = &kvaser_usb_hydra_dev_ops;
} else {
dev_err(&intf->dev,
"Product ID (%d) is not a supported Kvaser USB device\n",
id->idProduct);
return -ENODEV;
}
dev->intf = intf;
err = dev->ops->dev_setup_endpoints(dev);
if (err) {
dev_err(&intf->dev, "Cannot get usb endpoint(s)");
return err;
}
dev->udev = interface_to_usbdev(intf);
init_usb_anchor(&dev->rx_submitted);
usb_set_intfdata(intf, dev);
dev->card_data.ctrlmode_supported = 0;
dev->card_data.capabilities = 0;
err = dev->ops->dev_init_card(dev);
if (err) {
dev_err(&intf->dev,
"Failed to initialize card, error %d\n", err);
return err;
}
err = dev->ops->dev_get_software_info(dev);
if (err) {
dev_err(&intf->dev,
"Cannot get software info, error %d\n", err);
return err;
}
if (dev->ops->dev_get_software_details) {
err = dev->ops->dev_get_software_details(dev);
if (err) {
dev_err(&intf->dev,
"Cannot get software details, error %d\n", err);
return err;
}
}
if (WARN_ON(!dev->cfg))
return -ENODEV;
dev_dbg(&intf->dev, "Firmware version: %d.%d.%d\n",
((dev->fw_version >> 24) & 0xff),
((dev->fw_version >> 16) & 0xff),
(dev->fw_version & 0xffff));
dev_dbg(&intf->dev, "Max outstanding tx = %d URBs\n", dev->max_tx_urbs);
err = dev->ops->dev_get_card_info(dev);
if (err) {
dev_err(&intf->dev, "Cannot get card info, error %d\n", err);
return err;
}
if (dev->ops->dev_get_capabilities) {
err = dev->ops->dev_get_capabilities(dev);
if (err) {
dev_err(&intf->dev,
"Cannot get capabilities, error %d\n", err);
kvaser_usb_remove_interfaces(dev);
return err;
}
}
for (i = 0; i < dev->nchannels; i++) {
err = kvaser_usb_init_one(dev, id, i);
if (err) {
kvaser_usb_remove_interfaces(dev);
return err;
}
}
return 0;
}
static void kvaser_usb_disconnect(struct usb_interface *intf)
{
struct kvaser_usb *dev = usb_get_intfdata(intf);
usb_set_intfdata(intf, NULL);
if (!dev)
return;
kvaser_usb_remove_interfaces(dev);
}
static struct usb_driver kvaser_usb_driver = {
.name = "kvaser_usb",
.probe = kvaser_usb_probe,
.disconnect = kvaser_usb_disconnect,
.id_table = kvaser_usb_table,
};
module_usb_driver(kvaser_usb_driver);
MODULE_AUTHOR("Olivier Sobrie <olivier@sobrie.be>");
MODULE_AUTHOR("Kvaser AB <support@kvaser.com>");
MODULE_DESCRIPTION("CAN driver for Kvaser CAN/USB devices");
MODULE_LICENSE("GPL v2");
// SPDX-License-Identifier: GPL-2.0
/* Parts of this driver are based on the following:
* - Kvaser linux mhydra driver (version 5.24)
* - CAN driver for esd CAN-USB/2
*
* Copyright (C) 2018 KVASER AB, Sweden. All rights reserved.
* Copyright (C) 2010 Matthias Fuchs <matthias.fuchs@esd.eu>, esd gmbh
*
* Known issues:
* - Transition from CAN_STATE_ERROR_WARNING to CAN_STATE_ERROR_ACTIVE is only
* reported after a call to do_get_berr_counter(), since firmware does not
* distinguish between ERROR_WARNING and ERROR_ACTIVE.
* - Hardware timestamps are not set for CAN Tx frames.
*/
#include <linux/completion.h>
#include <linux/device.h>
#include <linux/gfp.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/spinlock.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/usb.h>
#include <linux/can.h>
#include <linux/can/dev.h>
#include <linux/can/error.h>
#include <linux/can/netlink.h>
#include "kvaser_usb.h"
/* Forward declarations */
static const struct kvaser_usb_dev_cfg kvaser_usb_hydra_dev_cfg_kcan;
static const struct kvaser_usb_dev_cfg kvaser_usb_hydra_dev_cfg_flexc;
#define KVASER_USB_HYDRA_BULK_EP_IN_ADDR 0x82
#define KVASER_USB_HYDRA_BULK_EP_OUT_ADDR 0x02
#define KVASER_USB_HYDRA_MAX_TRANSID 0xff
#define KVASER_USB_HYDRA_MIN_TRANSID 0x01
/* Minihydra command IDs */
#define CMD_SET_BUSPARAMS_REQ 16
#define CMD_GET_CHIP_STATE_REQ 19
#define CMD_CHIP_STATE_EVENT 20
#define CMD_SET_DRIVERMODE_REQ 21
#define CMD_START_CHIP_REQ 26
#define CMD_START_CHIP_RESP 27
#define CMD_STOP_CHIP_REQ 28
#define CMD_STOP_CHIP_RESP 29
#define CMD_TX_CAN_MESSAGE 33
#define CMD_GET_CARD_INFO_REQ 34
#define CMD_GET_CARD_INFO_RESP 35
#define CMD_GET_SOFTWARE_INFO_REQ 38
#define CMD_GET_SOFTWARE_INFO_RESP 39
#define CMD_ERROR_EVENT 45
#define CMD_FLUSH_QUEUE 48
#define CMD_TX_ACKNOWLEDGE 50
#define CMD_FLUSH_QUEUE_RESP 66
#define CMD_SET_BUSPARAMS_FD_REQ 69
#define CMD_SET_BUSPARAMS_FD_RESP 70
#define CMD_SET_BUSPARAMS_RESP 85
#define CMD_GET_CAPABILITIES_REQ 95
#define CMD_GET_CAPABILITIES_RESP 96
#define CMD_RX_MESSAGE 106
#define CMD_MAP_CHANNEL_REQ 200
#define CMD_MAP_CHANNEL_RESP 201
#define CMD_GET_SOFTWARE_DETAILS_REQ 202
#define CMD_GET_SOFTWARE_DETAILS_RESP 203
#define CMD_EXTENDED 255
/* Minihydra extended command IDs */
#define CMD_TX_CAN_MESSAGE_FD 224
#define CMD_TX_ACKNOWLEDGE_FD 225
#define CMD_RX_MESSAGE_FD 226
/* Hydra commands are handled by different threads in firmware.
* The threads are denoted hydra entity (HE). Each HE got a unique 6-bit
* address. The address is used in hydra commands to get/set source and
* destination HE. There are two predefined HE addresses, the remaining
* addresses are different between devices and firmware versions. Hence, we need
* to enumerate the addresses (see kvaser_usb_hydra_map_channel()).
*/
/* Well-known HE addresses */
#define KVASER_USB_HYDRA_HE_ADDRESS_ROUTER 0x00
#define KVASER_USB_HYDRA_HE_ADDRESS_ILLEGAL 0x3e
#define KVASER_USB_HYDRA_TRANSID_CANHE 0x40
#define KVASER_USB_HYDRA_TRANSID_SYSDBG 0x61
struct kvaser_cmd_map_ch_req {
char name[16];
u8 channel;
u8 reserved[11];
} __packed;
struct kvaser_cmd_map_ch_res {
u8 he_addr;
u8 channel;
u8 reserved[26];
} __packed;
struct kvaser_cmd_card_info {
__le32 serial_number;
__le32 clock_res;
__le32 mfg_date;
__le32 ean[2];
u8 hw_version;
u8 usb_mode;
u8 hw_type;
u8 reserved0;
u8 nchannels;
u8 reserved1[3];
} __packed;
struct kvaser_cmd_sw_info {
u8 reserved0[8];
__le16 max_outstanding_tx;
u8 reserved1[18];
} __packed;
struct kvaser_cmd_sw_detail_req {
u8 use_ext_cmd;
u8 reserved[27];
} __packed;
/* Software detail flags */
#define KVASER_USB_HYDRA_SW_FLAG_FW_BETA BIT(2)
#define KVASER_USB_HYDRA_SW_FLAG_FW_BAD BIT(4)
#define KVASER_USB_HYDRA_SW_FLAG_FREQ_80M BIT(5)
#define KVASER_USB_HYDRA_SW_FLAG_EXT_CMD BIT(9)
#define KVASER_USB_HYDRA_SW_FLAG_CANFD BIT(10)
#define KVASER_USB_HYDRA_SW_FLAG_NONISO BIT(11)
#define KVASER_USB_HYDRA_SW_FLAG_EXT_CAP BIT(12)
struct kvaser_cmd_sw_detail_res {
__le32 sw_flags;
__le32 sw_version;
__le32 sw_name;
__le32 ean[2];
__le32 max_bitrate;
u8 reserved[4];
} __packed;
/* Sub commands for cap_req and cap_res */
#define KVASER_USB_HYDRA_CAP_CMD_LISTEN_MODE 0x02
#define KVASER_USB_HYDRA_CAP_CMD_ERR_REPORT 0x05
#define KVASER_USB_HYDRA_CAP_CMD_ONE_SHOT 0x06
struct kvaser_cmd_cap_req {
__le16 cap_cmd;
u8 reserved[26];
} __packed;
/* Status codes for cap_res */
#define KVASER_USB_HYDRA_CAP_STAT_OK 0x00
#define KVASER_USB_HYDRA_CAP_STAT_NOT_IMPL 0x01
#define KVASER_USB_HYDRA_CAP_STAT_UNAVAIL 0x02
struct kvaser_cmd_cap_res {
__le16 cap_cmd;
__le16 status;
__le32 mask;
__le32 value;
u8 reserved[16];
} __packed;
/* CMD_ERROR_EVENT error codes */
#define KVASER_USB_HYDRA_ERROR_EVENT_CAN 0x01
#define KVASER_USB_HYDRA_ERROR_EVENT_PARAM 0x09
struct kvaser_cmd_error_event {
__le16 timestamp[3];
u8 reserved;
u8 error_code;
__le16 info1;
__le16 info2;
} __packed;
/* Chip state status flags. Used for chip_state_event and err_frame_data. */
#define KVASER_USB_HYDRA_BUS_ERR_ACT 0x00
#define KVASER_USB_HYDRA_BUS_ERR_PASS BIT(5)
#define KVASER_USB_HYDRA_BUS_BUS_OFF BIT(6)
struct kvaser_cmd_chip_state_event {
__le16 timestamp[3];
u8 tx_err_counter;
u8 rx_err_counter;
u8 bus_status;
u8 reserved[19];
} __packed;
/* Busparam modes */
#define KVASER_USB_HYDRA_BUS_MODE_CAN 0x00
#define KVASER_USB_HYDRA_BUS_MODE_CANFD_ISO 0x01
#define KVASER_USB_HYDRA_BUS_MODE_NONISO 0x02
struct kvaser_cmd_set_busparams {
__le32 bitrate;
u8 tseg1;
u8 tseg2;
u8 sjw;
u8 nsamples;
u8 reserved0[4];
__le32 bitrate_d;
u8 tseg1_d;
u8 tseg2_d;
u8 sjw_d;
u8 nsamples_d;
u8 canfd_mode;
u8 reserved1[7];
} __packed;
/* Ctrl modes */
#define KVASER_USB_HYDRA_CTRLMODE_NORMAL 0x01
#define KVASER_USB_HYDRA_CTRLMODE_LISTEN 0x02
struct kvaser_cmd_set_ctrlmode {
u8 mode;
u8 reserved[27];
} __packed;
struct kvaser_err_frame_data {
u8 bus_status;
u8 reserved0;
u8 tx_err_counter;
u8 rx_err_counter;
u8 reserved1[4];
} __packed;
struct kvaser_cmd_rx_can {
u8 cmd_len;
u8 cmd_no;
u8 channel;
u8 flags;
__le16 timestamp[3];
u8 dlc;
u8 padding;
__le32 id;
union {
u8 data[8];
struct kvaser_err_frame_data err_frame_data;
};
} __packed;
/* Extended CAN ID flag. Used in rx_can and tx_can */
#define KVASER_USB_HYDRA_EXTENDED_FRAME_ID BIT(31)
struct kvaser_cmd_tx_can {
__le32 id;
u8 data[8];
u8 dlc;
u8 flags;
__le16 transid;
u8 channel;
u8 reserved[11];
} __packed;
struct kvaser_cmd_header {
u8 cmd_no;
/* The destination HE address is stored in 0..5 of he_addr.
* The upper part of source HE address is stored in 6..7 of he_addr, and
* the lower part is stored in 12..15 of transid.
*/
u8 he_addr;
__le16 transid;
} __packed;
struct kvaser_cmd {
struct kvaser_cmd_header header;
union {
struct kvaser_cmd_map_ch_req map_ch_req;
struct kvaser_cmd_map_ch_res map_ch_res;
struct kvaser_cmd_card_info card_info;
struct kvaser_cmd_sw_info sw_info;
struct kvaser_cmd_sw_detail_req sw_detail_req;
struct kvaser_cmd_sw_detail_res sw_detail_res;
struct kvaser_cmd_cap_req cap_req;
struct kvaser_cmd_cap_res cap_res;
struct kvaser_cmd_error_event error_event;
struct kvaser_cmd_set_busparams set_busparams_req;
struct kvaser_cmd_chip_state_event chip_state_event;
struct kvaser_cmd_set_ctrlmode set_ctrlmode;
struct kvaser_cmd_rx_can rx_can;
struct kvaser_cmd_tx_can tx_can;
} __packed;
} __packed;
/* CAN frame flags. Used in rx_can, ext_rx_can, tx_can and ext_tx_can */
#define KVASER_USB_HYDRA_CF_FLAG_ERROR_FRAME BIT(0)
#define KVASER_USB_HYDRA_CF_FLAG_OVERRUN BIT(1)
#define KVASER_USB_HYDRA_CF_FLAG_REMOTE_FRAME BIT(4)
#define KVASER_USB_HYDRA_CF_FLAG_EXTENDED_ID BIT(5)
/* CAN frame flags. Used in ext_rx_can and ext_tx_can */
#define KVASER_USB_HYDRA_CF_FLAG_OSM_NACK BIT(12)
#define KVASER_USB_HYDRA_CF_FLAG_ABL BIT(13)
#define KVASER_USB_HYDRA_CF_FLAG_FDF BIT(16)
#define KVASER_USB_HYDRA_CF_FLAG_BRS BIT(17)
#define KVASER_USB_HYDRA_CF_FLAG_ESI BIT(18)
/* KCAN packet header macros. Used in ext_rx_can and ext_tx_can */
#define KVASER_USB_KCAN_DATA_DLC_BITS 4
#define KVASER_USB_KCAN_DATA_DLC_SHIFT 8
#define KVASER_USB_KCAN_DATA_DLC_MASK \
GENMASK(KVASER_USB_KCAN_DATA_DLC_BITS - 1 + \
KVASER_USB_KCAN_DATA_DLC_SHIFT, \
KVASER_USB_KCAN_DATA_DLC_SHIFT)
#define KVASER_USB_KCAN_DATA_BRS BIT(14)
#define KVASER_USB_KCAN_DATA_FDF BIT(15)
#define KVASER_USB_KCAN_DATA_OSM BIT(16)
#define KVASER_USB_KCAN_DATA_AREQ BIT(31)
#define KVASER_USB_KCAN_DATA_SRR BIT(31)
#define KVASER_USB_KCAN_DATA_RTR BIT(29)
#define KVASER_USB_KCAN_DATA_IDE BIT(30)
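/* Worked example (illustrative value, not taken from real traffic): with the
* macros above, KVASER_USB_KCAN_DATA_DLC_MASK is GENMASK(11, 8) = 0xf00, so a
* kcan_header of 0x80000d00 decodes to
*
*   dlc = (0x80000d00 & 0xf00) >> 8 = 0xd
*
* which can_dlc2len() maps to 32 data bytes for a CAN FD frame, while bit 31
* (KVASER_USB_KCAN_DATA_AREQ) is also set.
*/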
struct kvaser_cmd_ext_rx_can {
__le32 flags;
__le32 id;
__le32 kcan_id;
__le32 kcan_header;
__le64 timestamp;
union {
u8 kcan_payload[64];
struct kvaser_err_frame_data err_frame_data;
};
} __packed;
struct kvaser_cmd_ext_tx_can {
__le32 flags;
__le32 id;
__le32 kcan_id;
__le32 kcan_header;
u8 databytes;
u8 dlc;
u8 reserved[6];
u8 kcan_payload[64];
} __packed;
struct kvaser_cmd_ext_tx_ack {
__le32 flags;
u8 reserved0[4];
__le64 timestamp;
u8 reserved1[8];
} __packed;
/* struct for extended commands (CMD_EXTENDED) */
struct kvaser_cmd_ext {
struct kvaser_cmd_header header;
__le16 len;
u8 cmd_no_ext;
u8 reserved;
union {
struct kvaser_cmd_ext_rx_can rx_can;
struct kvaser_cmd_ext_tx_can tx_can;
struct kvaser_cmd_ext_tx_ack tx_ack;
} __packed;
} __packed;
static const struct can_bittiming_const kvaser_usb_hydra_kcan_bittiming_c = {
.name = "kvaser_usb_kcan",
.tseg1_min = 1,
.tseg1_max = 255,
.tseg2_min = 1,
.tseg2_max = 32,
.sjw_max = 16,
.brp_min = 1,
.brp_max = 4096,
.brp_inc = 1,
};
static const struct can_bittiming_const kvaser_usb_hydra_flexc_bittiming_c = {
.name = "kvaser_usb_flex",
.tseg1_min = 4,
.tseg1_max = 16,
.tseg2_min = 2,
.tseg2_max = 8,
.sjw_max = 4,
.brp_min = 1,
.brp_max = 256,
.brp_inc = 1,
};
#define KVASER_USB_HYDRA_TRANSID_BITS 12
#define KVASER_USB_HYDRA_TRANSID_MASK \
GENMASK(KVASER_USB_HYDRA_TRANSID_BITS - 1, 0)
#define KVASER_USB_HYDRA_HE_ADDR_SRC_MASK GENMASK(7, 6)
#define KVASER_USB_HYDRA_HE_ADDR_DEST_MASK GENMASK(5, 0)
#define KVASER_USB_HYDRA_HE_ADDR_SRC_BITS 2
static inline u16 kvaser_usb_hydra_get_cmd_transid(const struct kvaser_cmd *cmd)
{
return le16_to_cpu(cmd->header.transid) & KVASER_USB_HYDRA_TRANSID_MASK;
}
static inline void kvaser_usb_hydra_set_cmd_transid(struct kvaser_cmd *cmd,
u16 transid)
{
cmd->header.transid =
cpu_to_le16(transid & KVASER_USB_HYDRA_TRANSID_MASK);
}
static inline u8 kvaser_usb_hydra_get_cmd_src_he(const struct kvaser_cmd *cmd)
{
return (cmd->header.he_addr & KVASER_USB_HYDRA_HE_ADDR_SRC_MASK) >>
KVASER_USB_HYDRA_HE_ADDR_SRC_BITS |
le16_to_cpu(cmd->header.transid) >>
KVASER_USB_HYDRA_TRANSID_BITS;
}
static inline void kvaser_usb_hydra_set_cmd_dest_he(struct kvaser_cmd *cmd,
u8 dest_he)
{
cmd->header.he_addr =
(cmd->header.he_addr & KVASER_USB_HYDRA_HE_ADDR_SRC_MASK) |
(dest_he & KVASER_USB_HYDRA_HE_ADDR_DEST_MASK);
}
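/* Worked example (illustrative values): for a command received with
* he_addr = 0xc5 and transid = 0xa123, the helpers above decode
*
*   dest HE        = 0xc5 & GENMASK(5, 0)                       = 0x05
*   src HE         = ((0xc5 & 0xc0) >> 2) | (0xa123 >> 12)
*                  = 0x30 | 0x0a                                = 0x3a
*   transaction id = 0xa123 & KVASER_USB_HYDRA_TRANSID_MASK     = 0x123
*
* i.e. kvaser_usb_hydra_get_cmd_src_he() returns 0x3a and
* kvaser_usb_hydra_get_cmd_transid() returns 0x123.
*/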
static u8 kvaser_usb_hydra_channel_from_cmd(const struct kvaser_usb *dev,
const struct kvaser_cmd *cmd)
{
int i;
u8 channel = 0xff;
u8 src_he = kvaser_usb_hydra_get_cmd_src_he(cmd);
for (i = 0; i < KVASER_USB_MAX_NET_DEVICES; i++) {
if (dev->card_data.hydra.channel_to_he[i] == src_he) {
channel = i;
break;
}
}
return channel;
}
static u16 kvaser_usb_hydra_get_next_transid(struct kvaser_usb *dev)
{
unsigned long flags;
u16 transid;
struct kvaser_usb_dev_card_data_hydra *card_data =
&dev->card_data.hydra;
spin_lock_irqsave(&card_data->transid_lock, flags);
transid = card_data->transid;
if (transid >= KVASER_USB_HYDRA_MAX_TRANSID)
transid = KVASER_USB_HYDRA_MIN_TRANSID;
else
transid++;
card_data->transid = transid;
spin_unlock_irqrestore(&card_data->transid_lock, flags);
return transid;
}
static size_t kvaser_usb_hydra_cmd_size(struct kvaser_cmd *cmd)
{
size_t ret;
if (cmd->header.cmd_no == CMD_EXTENDED)
ret = le16_to_cpu(((struct kvaser_cmd_ext *)cmd)->len);
else
ret = sizeof(struct kvaser_cmd);
return ret;
}
static struct kvaser_usb_net_priv *
kvaser_usb_hydra_net_priv_from_cmd(const struct kvaser_usb *dev,
const struct kvaser_cmd *cmd)
{
struct kvaser_usb_net_priv *priv = NULL;
u8 channel = kvaser_usb_hydra_channel_from_cmd(dev, cmd);
if (channel >= dev->nchannels)
dev_err(&dev->intf->dev,
"Invalid channel number (%d)\n", channel);
else
priv = dev->nets[channel];
return priv;
}
static ktime_t
kvaser_usb_hydra_ktime_from_rx_cmd(const struct kvaser_usb_dev_cfg *cfg,
const struct kvaser_cmd *cmd)
{
u64 ticks;
if (cmd->header.cmd_no == CMD_EXTENDED) {
struct kvaser_cmd_ext *cmd_ext = (struct kvaser_cmd_ext *)cmd;
ticks = le64_to_cpu(cmd_ext->rx_can.timestamp);
} else {
ticks = le16_to_cpu(cmd->rx_can.timestamp[0]);
ticks += (u64)(le16_to_cpu(cmd->rx_can.timestamp[1])) << 16;
ticks += (u64)(le16_to_cpu(cmd->rx_can.timestamp[2])) << 32;
}
return ns_to_ktime(div_u64(ticks * 1000, cfg->timestamp_freq));
}
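/* Worked example (illustrative tick count): timestamp_freq is expressed in
* ticks per microsecond, so the conversion above is ticks * 1000 / freq
* nanoseconds. For the KCAN config further down (timestamp_freq = 80, i.e.
* 12.5 ns per tick), a raw timestamp of 4,000,000 ticks becomes
*
*   4,000,000 * 1000 / 80 = 50,000,000 ns = 50 ms
*
* while for the FlexC config (timestamp_freq = 1) one tick is exactly 1 us.
*/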
static int kvaser_usb_hydra_send_simple_cmd(struct kvaser_usb *dev,
u8 cmd_no, int channel)
{
struct kvaser_cmd *cmd;
int err;
cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
if (!cmd)
return -ENOMEM;
cmd->header.cmd_no = cmd_no;
if (channel < 0) {
kvaser_usb_hydra_set_cmd_dest_he
(cmd, KVASER_USB_HYDRA_HE_ADDRESS_ILLEGAL);
} else {
if (channel >= KVASER_USB_MAX_NET_DEVICES) {
dev_err(&dev->intf->dev, "channel (%d) out of range.\n",
channel);
err = -EINVAL;
goto end;
}
kvaser_usb_hydra_set_cmd_dest_he
(cmd, dev->card_data.hydra.channel_to_he[channel]);
}
kvaser_usb_hydra_set_cmd_transid
(cmd, kvaser_usb_hydra_get_next_transid(dev));
err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
if (err)
goto end;
end:
kfree(cmd);
return err;
}
static int
kvaser_usb_hydra_send_simple_cmd_async(struct kvaser_usb_net_priv *priv,
u8 cmd_no)
{
struct kvaser_cmd *cmd;
struct kvaser_usb *dev = priv->dev;
int err;
cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_ATOMIC);
if (!cmd)
return -ENOMEM;
cmd->header.cmd_no = cmd_no;
kvaser_usb_hydra_set_cmd_dest_he
(cmd, dev->card_data.hydra.channel_to_he[priv->channel]);
kvaser_usb_hydra_set_cmd_transid
(cmd, kvaser_usb_hydra_get_next_transid(dev));
err = kvaser_usb_send_cmd_async(priv, cmd,
kvaser_usb_hydra_cmd_size(cmd));
if (err)
kfree(cmd);
return err;
}
/* This function is used for synchronously waiting on hydra control commands.
* Note: Compared to kvaser_usb_hydra_read_bulk_callback(), we never need to
* handle partial hydra commands here, since hydra control commands are always
* non-extended commands.
*/
static int kvaser_usb_hydra_wait_cmd(const struct kvaser_usb *dev, u8 cmd_no,
struct kvaser_cmd *cmd)
{
void *buf;
int err;
unsigned long timeout = jiffies + msecs_to_jiffies(KVASER_USB_TIMEOUT);
if (cmd->header.cmd_no == CMD_EXTENDED) {
dev_err(&dev->intf->dev, "Wait for CMD_EXTENDED not allowed\n");
return -EINVAL;
}
buf = kzalloc(KVASER_USB_RX_BUFFER_SIZE, GFP_KERNEL);
if (!buf)
return -ENOMEM;
do {
int actual_len = 0;
int pos = 0;
err = kvaser_usb_recv_cmd(dev, buf, KVASER_USB_RX_BUFFER_SIZE,
&actual_len);
if (err < 0)
goto end;
while (pos < actual_len) {
struct kvaser_cmd *tmp_cmd;
size_t cmd_len;
tmp_cmd = buf + pos;
cmd_len = kvaser_usb_hydra_cmd_size(tmp_cmd);
if (pos + cmd_len > actual_len) {
dev_err_ratelimited(&dev->intf->dev,
"Format error\n");
break;
}
if (tmp_cmd->header.cmd_no == cmd_no) {
memcpy(cmd, tmp_cmd, cmd_len);
goto end;
}
pos += cmd_len;
}
} while (time_before(jiffies, timeout));
err = -EINVAL;
end:
kfree(buf);
return err;
}
static int kvaser_usb_hydra_map_channel_resp(struct kvaser_usb *dev,
const struct kvaser_cmd *cmd)
{
u8 he, channel;
u16 transid = kvaser_usb_hydra_get_cmd_transid(cmd);
struct kvaser_usb_dev_card_data_hydra *card_data =
&dev->card_data.hydra;
if (transid > 0x007f || transid < 0x0040) {
dev_err(&dev->intf->dev,
"CMD_MAP_CHANNEL_RESP, invalid transid: 0x%x\n",
transid);
return -EINVAL;
}
switch (transid) {
case KVASER_USB_HYDRA_TRANSID_CANHE:
case KVASER_USB_HYDRA_TRANSID_CANHE + 1:
case KVASER_USB_HYDRA_TRANSID_CANHE + 2:
case KVASER_USB_HYDRA_TRANSID_CANHE + 3:
case KVASER_USB_HYDRA_TRANSID_CANHE + 4:
channel = transid & 0x000f;
he = cmd->map_ch_res.he_addr;
card_data->channel_to_he[channel] = he;
break;
case KVASER_USB_HYDRA_TRANSID_SYSDBG:
card_data->sysdbg_he = cmd->map_ch_res.he_addr;
break;
default:
dev_warn(&dev->intf->dev,
"Unknown CMD_MAP_CHANNEL_RESP transid=0x%x\n",
transid);
break;
}
return 0;
}
static int kvaser_usb_hydra_map_channel(struct kvaser_usb *dev, u16 transid,
u8 channel, const char *name)
{
struct kvaser_cmd *cmd;
int err;
cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
if (!cmd)
return -ENOMEM;
strcpy(cmd->map_ch_req.name, name);
cmd->header.cmd_no = CMD_MAP_CHANNEL_REQ;
kvaser_usb_hydra_set_cmd_dest_he
(cmd, KVASER_USB_HYDRA_HE_ADDRESS_ROUTER);
cmd->map_ch_req.channel = channel;
kvaser_usb_hydra_set_cmd_transid(cmd, transid);
err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
if (err)
goto end;
err = kvaser_usb_hydra_wait_cmd(dev, CMD_MAP_CHANNEL_RESP, cmd);
if (err)
goto end;
err = kvaser_usb_hydra_map_channel_resp(dev, cmd);
if (err)
goto end;
end:
kfree(cmd);
return err;
}
static int kvaser_usb_hydra_get_single_capability(struct kvaser_usb *dev,
u16 cap_cmd_req, u16 *status)
{
struct kvaser_usb_dev_card_data *card_data = &dev->card_data;
struct kvaser_cmd *cmd;
u32 value = 0;
u32 mask = 0;
u16 cap_cmd_res;
int err;
int i;
cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
if (!cmd)
return -ENOMEM;
cmd->header.cmd_no = CMD_GET_CAPABILITIES_REQ;
cmd->cap_req.cap_cmd = cpu_to_le16(cap_cmd_req);
kvaser_usb_hydra_set_cmd_dest_he(cmd, card_data->hydra.sysdbg_he);
kvaser_usb_hydra_set_cmd_transid
(cmd, kvaser_usb_hydra_get_next_transid(dev));
err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
if (err)
goto end;
err = kvaser_usb_hydra_wait_cmd(dev, CMD_GET_CAPABILITIES_RESP, cmd);
if (err)
goto end;
*status = le16_to_cpu(cmd->cap_res.status);
if (*status != KVASER_USB_HYDRA_CAP_STAT_OK)
goto end;
cap_cmd_res = le16_to_cpu(cmd->cap_res.cap_cmd);
switch (cap_cmd_res) {
case KVASER_USB_HYDRA_CAP_CMD_LISTEN_MODE:
case KVASER_USB_HYDRA_CAP_CMD_ERR_REPORT:
case KVASER_USB_HYDRA_CAP_CMD_ONE_SHOT:
value = le32_to_cpu(cmd->cap_res.value);
mask = le32_to_cpu(cmd->cap_res.mask);
break;
default:
dev_warn(&dev->intf->dev, "Unknown capability command %u\n",
cap_cmd_res);
break;
}
for (i = 0; i < dev->nchannels; i++) {
if (BIT(i) & (value & mask)) {
switch (cap_cmd_res) {
case KVASER_USB_HYDRA_CAP_CMD_LISTEN_MODE:
card_data->ctrlmode_supported |=
CAN_CTRLMODE_LISTENONLY;
break;
case KVASER_USB_HYDRA_CAP_CMD_ERR_REPORT:
card_data->capabilities |=
KVASER_USB_CAP_BERR_CAP;
break;
case KVASER_USB_HYDRA_CAP_CMD_ONE_SHOT:
card_data->ctrlmode_supported |=
CAN_CTRLMODE_ONE_SHOT;
break;
}
}
}
end:
kfree(cmd);
return err;
}
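/* Worked example (illustrative response): for CAP_CMD_LISTEN_MODE a device
* might answer mask = 0x0f and value = 0x03. The loop above enables the
* capability for every channel whose bit is set in both, i.e. channels 0 and
* 1 here, by setting CAN_CTRLMODE_LISTENONLY in the (device-wide) card_data.
*/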
static void kvaser_usb_hydra_start_chip_reply(const struct kvaser_usb *dev,
const struct kvaser_cmd *cmd)
{
struct kvaser_usb_net_priv *priv;
priv = kvaser_usb_hydra_net_priv_from_cmd(dev, cmd);
if (!priv)
return;
if (completion_done(&priv->start_comp) &&
netif_queue_stopped(priv->netdev)) {
netif_wake_queue(priv->netdev);
} else {
netif_start_queue(priv->netdev);
complete(&priv->start_comp);
}
}
static void kvaser_usb_hydra_stop_chip_reply(const struct kvaser_usb *dev,
const struct kvaser_cmd *cmd)
{
struct kvaser_usb_net_priv *priv;
priv = kvaser_usb_hydra_net_priv_from_cmd(dev, cmd);
if (!priv)
return;
complete(&priv->stop_comp);
}
static void kvaser_usb_hydra_flush_queue_reply(const struct kvaser_usb *dev,
const struct kvaser_cmd *cmd)
{
struct kvaser_usb_net_priv *priv;
priv = kvaser_usb_hydra_net_priv_from_cmd(dev, cmd);
if (!priv)
return;
complete(&priv->flush_comp);
}
static void
kvaser_usb_hydra_bus_status_to_can_state(const struct kvaser_usb_net_priv *priv,
u8 bus_status,
const struct can_berr_counter *bec,
enum can_state *new_state)
{
if (bus_status & KVASER_USB_HYDRA_BUS_BUS_OFF) {
*new_state = CAN_STATE_BUS_OFF;
} else if (bus_status & KVASER_USB_HYDRA_BUS_ERR_PASS) {
*new_state = CAN_STATE_ERROR_PASSIVE;
} else if (bus_status == KVASER_USB_HYDRA_BUS_ERR_ACT) {
if (bec->txerr >= 128 || bec->rxerr >= 128) {
netdev_warn(priv->netdev,
"ERR_ACTIVE but err tx=%u or rx=%u >=128\n",
bec->txerr, bec->rxerr);
*new_state = CAN_STATE_ERROR_PASSIVE;
} else if (bec->txerr >= 96 || bec->rxerr >= 96) {
*new_state = CAN_STATE_ERROR_WARNING;
} else {
*new_state = CAN_STATE_ERROR_ACTIVE;
}
}
}
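/* Worked examples (illustrative counter values) for the mapping above:
*
*   bus_status = KVASER_USB_HYDRA_BUS_BUS_OFF              -> BUS_OFF,
*                                              regardless of the counters
*   bus_status = KVASER_USB_HYDRA_BUS_ERR_ACT, txerr = 97  -> ERROR_WARNING
*   bus_status = KVASER_USB_HYDRA_BUS_ERR_ACT, txerr = 130 -> ERROR_PASSIVE,
*                                              plus a netdev warning, since
*                                              the firmware still claims
*                                              error-active
*/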
static void kvaser_usb_hydra_update_state(struct kvaser_usb_net_priv *priv,
u8 bus_status,
const struct can_berr_counter *bec)
{
struct net_device *netdev = priv->netdev;
struct can_frame *cf;
struct sk_buff *skb;
struct net_device_stats *stats;
enum can_state new_state, old_state;
old_state = priv->can.state;
kvaser_usb_hydra_bus_status_to_can_state(priv, bus_status, bec,
&new_state);
if (new_state == old_state)
return;
/* Ignore state change if previous state was STOPPED and the new state
* is BUS_OFF. Firmware always reports this as BUS_OFF, since firmware
* does not distinguish between BUS_OFF and STOPPED.
*/
if (old_state == CAN_STATE_STOPPED && new_state == CAN_STATE_BUS_OFF)
return;
skb = alloc_can_err_skb(netdev, &cf);
if (skb) {
enum can_state tx_state, rx_state;
tx_state = (bec->txerr >= bec->rxerr) ?
new_state : CAN_STATE_ERROR_ACTIVE;
rx_state = (bec->txerr <= bec->rxerr) ?
new_state : CAN_STATE_ERROR_ACTIVE;
can_change_state(netdev, cf, tx_state, rx_state);
}
if (new_state == CAN_STATE_BUS_OFF && old_state < CAN_STATE_BUS_OFF) {
if (!priv->can.restart_ms)
kvaser_usb_hydra_send_simple_cmd_async
(priv, CMD_STOP_CHIP_REQ);
can_bus_off(netdev);
}
if (!skb) {
netdev_warn(netdev, "No memory left for err_skb\n");
return;
}
if (priv->can.restart_ms &&
old_state >= CAN_STATE_BUS_OFF &&
new_state < CAN_STATE_BUS_OFF)
priv->can.can_stats.restarts++;
cf->data[6] = bec->txerr;
cf->data[7] = bec->rxerr;
stats = &netdev->stats;
stats->rx_packets++;
stats->rx_bytes += cf->can_dlc;
netif_rx(skb);
}
static void kvaser_usb_hydra_state_event(const struct kvaser_usb *dev,
const struct kvaser_cmd *cmd)
{
struct kvaser_usb_net_priv *priv;
struct can_berr_counter bec;
u8 bus_status;
priv = kvaser_usb_hydra_net_priv_from_cmd(dev, cmd);
if (!priv)
return;
bus_status = cmd->chip_state_event.bus_status;
bec.txerr = cmd->chip_state_event.tx_err_counter;
bec.rxerr = cmd->chip_state_event.rx_err_counter;
kvaser_usb_hydra_update_state(priv, bus_status, &bec);
priv->bec.txerr = bec.txerr;
priv->bec.rxerr = bec.rxerr;
}
static void kvaser_usb_hydra_error_event_parameter(const struct kvaser_usb *dev,
const struct kvaser_cmd *cmd)
{
/* info1 will contain the offending cmd_no */
switch (le16_to_cpu(cmd->error_event.info1)) {
case CMD_START_CHIP_REQ:
dev_warn(&dev->intf->dev,
"CMD_START_CHIP_REQ error in parameter\n");
break;
case CMD_STOP_CHIP_REQ:
dev_warn(&dev->intf->dev,
"CMD_STOP_CHIP_REQ error in parameter\n");
break;
case CMD_FLUSH_QUEUE:
dev_warn(&dev->intf->dev,
"CMD_FLUSH_QUEUE error in parameter\n");
break;
case CMD_SET_BUSPARAMS_REQ:
dev_warn(&dev->intf->dev,
"Set bittiming failed. Error in parameter\n");
break;
case CMD_SET_BUSPARAMS_FD_REQ:
dev_warn(&dev->intf->dev,
"Set data bittiming failed. Error in parameter\n");
break;
default:
dev_warn(&dev->intf->dev,
"Unhandled parameter error event cmd_no (%u)\n",
le16_to_cpu(cmd->error_event.info1));
break;
}
}
static void kvaser_usb_hydra_error_event(const struct kvaser_usb *dev,
const struct kvaser_cmd *cmd)
{
switch (cmd->error_event.error_code) {
case KVASER_USB_HYDRA_ERROR_EVENT_PARAM:
kvaser_usb_hydra_error_event_parameter(dev, cmd);
break;
case KVASER_USB_HYDRA_ERROR_EVENT_CAN:
/* Wrong channel mapping?! This should never happen!
* info1 will contain the offending cmd_no
*/
dev_err(&dev->intf->dev,
"Received CAN error event for cmd_no (%u)\n",
le16_to_cpu(cmd->error_event.info1));
break;
default:
dev_warn(&dev->intf->dev,
"Unhandled error event (%d)\n",
cmd->error_event.error_code);
break;
}
}
static void
kvaser_usb_hydra_error_frame(struct kvaser_usb_net_priv *priv,
const struct kvaser_err_frame_data *err_frame_data,
ktime_t hwtstamp)
{
struct net_device *netdev = priv->netdev;
struct net_device_stats *stats = &netdev->stats;
struct can_frame *cf;
struct sk_buff *skb;
struct skb_shared_hwtstamps *shhwtstamps;
struct can_berr_counter bec;
enum can_state new_state, old_state;
u8 bus_status;
priv->can.can_stats.bus_error++;
stats->rx_errors++;
bus_status = err_frame_data->bus_status;
bec.txerr = err_frame_data->tx_err_counter;
bec.rxerr = err_frame_data->rx_err_counter;
old_state = priv->can.state;
kvaser_usb_hydra_bus_status_to_can_state(priv, bus_status, &bec,
&new_state);
skb = alloc_can_err_skb(netdev, &cf);
if (new_state != old_state) {
if (skb) {
enum can_state tx_state, rx_state;
tx_state = (bec.txerr >= bec.rxerr) ?
new_state : CAN_STATE_ERROR_ACTIVE;
rx_state = (bec.txerr <= bec.rxerr) ?
new_state : CAN_STATE_ERROR_ACTIVE;
can_change_state(netdev, cf, tx_state, rx_state);
}
if (new_state == CAN_STATE_BUS_OFF) {
if (!priv->can.restart_ms)
kvaser_usb_hydra_send_simple_cmd_async
(priv, CMD_STOP_CHIP_REQ);
can_bus_off(netdev);
}
if (priv->can.restart_ms &&
old_state >= CAN_STATE_BUS_OFF &&
new_state < CAN_STATE_BUS_OFF)
cf->can_id |= CAN_ERR_RESTARTED;
}
if (!skb) {
stats->rx_dropped++;
netdev_warn(netdev, "No memory left for err_skb\n");
return;
}
shhwtstamps = skb_hwtstamps(skb);
shhwtstamps->hwtstamp = hwtstamp;
cf->can_id |= CAN_ERR_BUSERROR;
cf->data[6] = bec.txerr;
cf->data[7] = bec.rxerr;
stats->rx_packets++;
stats->rx_bytes += cf->can_dlc;
netif_rx(skb);
priv->bec.txerr = bec.txerr;
priv->bec.rxerr = bec.rxerr;
}
static void kvaser_usb_hydra_one_shot_fail(struct kvaser_usb_net_priv *priv,
const struct kvaser_cmd_ext *cmd)
{
struct net_device *netdev = priv->netdev;
struct net_device_stats *stats = &netdev->stats;
struct can_frame *cf;
struct sk_buff *skb;
u32 flags;
skb = alloc_can_err_skb(netdev, &cf);
if (!skb) {
stats->rx_dropped++;
netdev_warn(netdev, "No memory left for err_skb\n");
return;
}
cf->can_id |= CAN_ERR_BUSERROR;
flags = le32_to_cpu(cmd->tx_ack.flags);
if (flags & KVASER_USB_HYDRA_CF_FLAG_OSM_NACK)
cf->can_id |= CAN_ERR_ACK;
if (flags & KVASER_USB_HYDRA_CF_FLAG_ABL) {
cf->can_id |= CAN_ERR_LOSTARB;
priv->can.can_stats.arbitration_lost++;
}
stats->tx_errors++;
stats->rx_packets++;
stats->rx_bytes += cf->can_dlc;
netif_rx(skb);
}
static void kvaser_usb_hydra_tx_acknowledge(const struct kvaser_usb *dev,
const struct kvaser_cmd *cmd)
{
struct kvaser_usb_tx_urb_context *context;
struct kvaser_usb_net_priv *priv;
unsigned long irq_flags;
bool one_shot_fail = false;
u16 transid = kvaser_usb_hydra_get_cmd_transid(cmd);
priv = kvaser_usb_hydra_net_priv_from_cmd(dev, cmd);
if (!priv)
return;
if (!netif_device_present(priv->netdev))
return;
if (cmd->header.cmd_no == CMD_EXTENDED) {
struct kvaser_cmd_ext *cmd_ext = (struct kvaser_cmd_ext *)cmd;
u32 flags = le32_to_cpu(cmd_ext->tx_ack.flags);
if (flags & (KVASER_USB_HYDRA_CF_FLAG_OSM_NACK |
KVASER_USB_HYDRA_CF_FLAG_ABL)) {
kvaser_usb_hydra_one_shot_fail(priv, cmd_ext);
one_shot_fail = true;
}
}
context = &priv->tx_contexts[transid % dev->max_tx_urbs];
if (!one_shot_fail) {
struct net_device_stats *stats = &priv->netdev->stats;
stats->tx_packets++;
stats->tx_bytes += can_dlc2len(context->dlc);
}
spin_lock_irqsave(&priv->tx_contexts_lock, irq_flags);
can_get_echo_skb(priv->netdev, context->echo_index);
context->echo_index = dev->max_tx_urbs;
--priv->active_tx_contexts;
netif_wake_queue(priv->netdev);
spin_unlock_irqrestore(&priv->tx_contexts_lock, irq_flags);
}
static void kvaser_usb_hydra_rx_msg_std(const struct kvaser_usb *dev,
const struct kvaser_cmd *cmd)
{
struct kvaser_usb_net_priv *priv = NULL;
struct can_frame *cf;
struct sk_buff *skb;
struct skb_shared_hwtstamps *shhwtstamps;
struct net_device_stats *stats;
u8 flags;
ktime_t hwtstamp;
priv = kvaser_usb_hydra_net_priv_from_cmd(dev, cmd);
if (!priv)
return;
stats = &priv->netdev->stats;
flags = cmd->rx_can.flags;
hwtstamp = kvaser_usb_hydra_ktime_from_rx_cmd(dev->cfg, cmd);
if (flags & KVASER_USB_HYDRA_CF_FLAG_ERROR_FRAME) {
kvaser_usb_hydra_error_frame(priv, &cmd->rx_can.err_frame_data,
hwtstamp);
return;
}
skb = alloc_can_skb(priv->netdev, &cf);
if (!skb) {
stats->rx_dropped++;
return;
}
shhwtstamps = skb_hwtstamps(skb);
shhwtstamps->hwtstamp = hwtstamp;
cf->can_id = le32_to_cpu(cmd->rx_can.id);
if (cf->can_id & KVASER_USB_HYDRA_EXTENDED_FRAME_ID) {
cf->can_id &= CAN_EFF_MASK;
cf->can_id |= CAN_EFF_FLAG;
} else {
cf->can_id &= CAN_SFF_MASK;
}
if (flags & KVASER_USB_HYDRA_CF_FLAG_OVERRUN)
kvaser_usb_can_rx_over_error(priv->netdev);
cf->can_dlc = get_can_dlc(cmd->rx_can.dlc);
if (flags & KVASER_USB_HYDRA_CF_FLAG_REMOTE_FRAME)
cf->can_id |= CAN_RTR_FLAG;
else
memcpy(cf->data, cmd->rx_can.data, cf->can_dlc);
stats->rx_packets++;
stats->rx_bytes += cf->can_dlc;
netif_rx(skb);
}
static void kvaser_usb_hydra_rx_msg_ext(const struct kvaser_usb *dev,
const struct kvaser_cmd_ext *cmd)
{
struct kvaser_cmd *std_cmd = (struct kvaser_cmd *)cmd;
struct kvaser_usb_net_priv *priv;
struct canfd_frame *cf;
struct sk_buff *skb;
struct skb_shared_hwtstamps *shhwtstamps;
struct net_device_stats *stats;
u32 flags;
u8 dlc;
u32 kcan_header;
ktime_t hwtstamp;
priv = kvaser_usb_hydra_net_priv_from_cmd(dev, std_cmd);
if (!priv)
return;
stats = &priv->netdev->stats;
kcan_header = le32_to_cpu(cmd->rx_can.kcan_header);
dlc = (kcan_header & KVASER_USB_KCAN_DATA_DLC_MASK) >>
KVASER_USB_KCAN_DATA_DLC_SHIFT;
flags = le32_to_cpu(cmd->rx_can.flags);
hwtstamp = kvaser_usb_hydra_ktime_from_rx_cmd(dev->cfg, std_cmd);
if (flags & KVASER_USB_HYDRA_CF_FLAG_ERROR_FRAME) {
kvaser_usb_hydra_error_frame(priv, &cmd->rx_can.err_frame_data,
hwtstamp);
return;
}
if (flags & KVASER_USB_HYDRA_CF_FLAG_FDF)
skb = alloc_canfd_skb(priv->netdev, &cf);
else
skb = alloc_can_skb(priv->netdev, (struct can_frame **)&cf);
if (!skb) {
stats->rx_dropped++;
return;
}
shhwtstamps = skb_hwtstamps(skb);
shhwtstamps->hwtstamp = hwtstamp;
cf->can_id = le32_to_cpu(cmd->rx_can.id);
if (flags & KVASER_USB_HYDRA_CF_FLAG_EXTENDED_ID) {
cf->can_id &= CAN_EFF_MASK;
cf->can_id |= CAN_EFF_FLAG;
} else {
cf->can_id &= CAN_SFF_MASK;
}
if (flags & KVASER_USB_HYDRA_CF_FLAG_OVERRUN)
kvaser_usb_can_rx_over_error(priv->netdev);
if (flags & KVASER_USB_HYDRA_CF_FLAG_FDF) {
cf->len = can_dlc2len(get_canfd_dlc(dlc));
if (flags & KVASER_USB_HYDRA_CF_FLAG_BRS)
cf->flags |= CANFD_BRS;
if (flags & KVASER_USB_HYDRA_CF_FLAG_ESI)
cf->flags |= CANFD_ESI;
} else {
cf->len = get_can_dlc(dlc);
}
if (flags & KVASER_USB_HYDRA_CF_FLAG_REMOTE_FRAME)
cf->can_id |= CAN_RTR_FLAG;
else
memcpy(cf->data, cmd->rx_can.kcan_payload, cf->len);
stats->rx_packets++;
stats->rx_bytes += cf->len;
netif_rx(skb);
}
static void kvaser_usb_hydra_handle_cmd_std(const struct kvaser_usb *dev,
const struct kvaser_cmd *cmd)
{
switch (cmd->header.cmd_no) {
case CMD_START_CHIP_RESP:
kvaser_usb_hydra_start_chip_reply(dev, cmd);
break;
case CMD_STOP_CHIP_RESP:
kvaser_usb_hydra_stop_chip_reply(dev, cmd);
break;
case CMD_FLUSH_QUEUE_RESP:
kvaser_usb_hydra_flush_queue_reply(dev, cmd);
break;
case CMD_CHIP_STATE_EVENT:
kvaser_usb_hydra_state_event(dev, cmd);
break;
case CMD_ERROR_EVENT:
kvaser_usb_hydra_error_event(dev, cmd);
break;
case CMD_TX_ACKNOWLEDGE:
kvaser_usb_hydra_tx_acknowledge(dev, cmd);
break;
case CMD_RX_MESSAGE:
kvaser_usb_hydra_rx_msg_std(dev, cmd);
break;
/* Ignored commands */
case CMD_SET_BUSPARAMS_RESP:
case CMD_SET_BUSPARAMS_FD_RESP:
break;
default:
dev_warn(&dev->intf->dev, "Unhandled command (%d)\n",
cmd->header.cmd_no);
break;
}
}
static void kvaser_usb_hydra_handle_cmd_ext(const struct kvaser_usb *dev,
const struct kvaser_cmd_ext *cmd)
{
switch (cmd->cmd_no_ext) {
case CMD_TX_ACKNOWLEDGE_FD:
kvaser_usb_hydra_tx_acknowledge(dev, (struct kvaser_cmd *)cmd);
break;
case CMD_RX_MESSAGE_FD:
kvaser_usb_hydra_rx_msg_ext(dev, cmd);
break;
default:
dev_warn(&dev->intf->dev, "Unhandled extended command (%d)\n",
cmd->header.cmd_no);
break;
}
}
static void kvaser_usb_hydra_handle_cmd(const struct kvaser_usb *dev,
const struct kvaser_cmd *cmd)
{
if (cmd->header.cmd_no == CMD_EXTENDED)
kvaser_usb_hydra_handle_cmd_ext
(dev, (struct kvaser_cmd_ext *)cmd);
else
kvaser_usb_hydra_handle_cmd_std(dev, cmd);
}
static void *
kvaser_usb_hydra_frame_to_cmd_ext(const struct kvaser_usb_net_priv *priv,
const struct sk_buff *skb, int *frame_len,
int *cmd_len, u16 transid)
{
struct kvaser_usb *dev = priv->dev;
struct kvaser_cmd_ext *cmd;
struct canfd_frame *cf = (struct canfd_frame *)skb->data;
u8 dlc = can_len2dlc(cf->len);
u8 nbr_of_bytes = cf->len;
u32 flags;
u32 id;
u32 kcan_id;
u32 kcan_header;
*frame_len = nbr_of_bytes;
cmd = kcalloc(1, sizeof(struct kvaser_cmd_ext), GFP_ATOMIC);
if (!cmd)
return NULL;
kvaser_usb_hydra_set_cmd_dest_he
((struct kvaser_cmd *)cmd,
dev->card_data.hydra.channel_to_he[priv->channel]);
kvaser_usb_hydra_set_cmd_transid((struct kvaser_cmd *)cmd, transid);
cmd->header.cmd_no = CMD_EXTENDED;
cmd->cmd_no_ext = CMD_TX_CAN_MESSAGE_FD;
*cmd_len = ALIGN(sizeof(struct kvaser_cmd_ext) -
sizeof(cmd->tx_can.kcan_payload) + nbr_of_bytes,
8);
cmd->len = cpu_to_le16(*cmd_len);
cmd->tx_can.databytes = nbr_of_bytes;
cmd->tx_can.dlc = dlc;
if (cf->can_id & CAN_EFF_FLAG) {
id = cf->can_id & CAN_EFF_MASK;
flags = KVASER_USB_HYDRA_CF_FLAG_EXTENDED_ID;
kcan_id = (cf->can_id & CAN_EFF_MASK) |
KVASER_USB_KCAN_DATA_IDE | KVASER_USB_KCAN_DATA_SRR;
} else {
id = cf->can_id & CAN_SFF_MASK;
flags = 0;
kcan_id = cf->can_id & CAN_SFF_MASK;
}
if (cf->can_id & CAN_ERR_FLAG)
flags |= KVASER_USB_HYDRA_CF_FLAG_ERROR_FRAME;
kcan_header = ((dlc << KVASER_USB_KCAN_DATA_DLC_SHIFT) &
KVASER_USB_KCAN_DATA_DLC_MASK) |
KVASER_USB_KCAN_DATA_AREQ |
(priv->can.ctrlmode & CAN_CTRLMODE_ONE_SHOT ?
KVASER_USB_KCAN_DATA_OSM : 0);
if (can_is_canfd_skb(skb)) {
kcan_header |= KVASER_USB_KCAN_DATA_FDF |
(cf->flags & CANFD_BRS ?
KVASER_USB_KCAN_DATA_BRS : 0);
} else {
if (cf->can_id & CAN_RTR_FLAG) {
kcan_id |= KVASER_USB_KCAN_DATA_RTR;
cmd->tx_can.databytes = 0;
flags |= KVASER_USB_HYDRA_CF_FLAG_REMOTE_FRAME;
}
}
cmd->tx_can.kcan_id = cpu_to_le32(kcan_id);
cmd->tx_can.id = cpu_to_le32(id);
cmd->tx_can.flags = cpu_to_le32(flags);
cmd->tx_can.kcan_header = cpu_to_le32(kcan_header);
memcpy(cmd->tx_can.kcan_payload, cf->data, nbr_of_bytes);
return cmd;
}
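/* Worked example for the *cmd_len computation above: with the __packed
* layouts earlier in this file, sizeof(struct kvaser_cmd_ext) works out to
* 96 bytes, of which kcan_payload is 64, so the fixed part is 32 bytes.
* An 8-byte classic frame is thus sent as ALIGN(32 + 8, 8) = 40 bytes,
* while a full 64-byte CAN FD frame needs ALIGN(32 + 64, 8) = 96 bytes.
*/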
static void *
kvaser_usb_hydra_frame_to_cmd_std(const struct kvaser_usb_net_priv *priv,
const struct sk_buff *skb, int *frame_len,
int *cmd_len, u16 transid)
{
struct kvaser_usb *dev = priv->dev;
struct kvaser_cmd *cmd;
struct can_frame *cf = (struct can_frame *)skb->data;
u32 flags;
u32 id;
*frame_len = cf->can_dlc;
cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_ATOMIC);
if (!cmd)
return NULL;
kvaser_usb_hydra_set_cmd_dest_he
(cmd, dev->card_data.hydra.channel_to_he[priv->channel]);
kvaser_usb_hydra_set_cmd_transid(cmd, transid);
cmd->header.cmd_no = CMD_TX_CAN_MESSAGE;
*cmd_len = ALIGN(sizeof(struct kvaser_cmd), 8);
if (cf->can_id & CAN_EFF_FLAG) {
id = (cf->can_id & CAN_EFF_MASK);
id |= KVASER_USB_HYDRA_EXTENDED_FRAME_ID;
} else {
id = cf->can_id & CAN_SFF_MASK;
}
cmd->tx_can.dlc = cf->can_dlc;
flags = (cf->can_id & CAN_EFF_FLAG ?
KVASER_USB_HYDRA_CF_FLAG_EXTENDED_ID : 0);
if (cf->can_id & CAN_RTR_FLAG)
flags |= KVASER_USB_HYDRA_CF_FLAG_REMOTE_FRAME;
flags |= (cf->can_id & CAN_ERR_FLAG ?
KVASER_USB_HYDRA_CF_FLAG_ERROR_FRAME : 0);
cmd->tx_can.id = cpu_to_le32(id);
cmd->tx_can.flags = flags;
memcpy(cmd->tx_can.data, cf->data, *frame_len);
return cmd;
}
static int kvaser_usb_hydra_set_mode(struct net_device *netdev,
enum can_mode mode)
{
int err = 0;
switch (mode) {
case CAN_MODE_START:
/* CAN controller automatically recovers from BUS_OFF */
break;
default:
err = -EOPNOTSUPP;
}
return err;
}
static int kvaser_usb_hydra_set_bittiming(struct net_device *netdev)
{
struct kvaser_cmd *cmd;
struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
struct can_bittiming *bt = &priv->can.bittiming;
struct kvaser_usb *dev = priv->dev;
int tseg1 = bt->prop_seg + bt->phase_seg1;
int tseg2 = bt->phase_seg2;
int sjw = bt->sjw;
int err;
cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
if (!cmd)
return -ENOMEM;
cmd->header.cmd_no = CMD_SET_BUSPARAMS_REQ;
cmd->set_busparams_req.bitrate = cpu_to_le32(bt->bitrate);
cmd->set_busparams_req.sjw = (u8)sjw;
cmd->set_busparams_req.tseg1 = (u8)tseg1;
cmd->set_busparams_req.tseg2 = (u8)tseg2;
cmd->set_busparams_req.nsamples = 1;
kvaser_usb_hydra_set_cmd_dest_he
(cmd, dev->card_data.hydra.channel_to_he[priv->channel]);
kvaser_usb_hydra_set_cmd_transid
(cmd, kvaser_usb_hydra_get_next_transid(dev));
err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
kfree(cmd);
return err;
}
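/* Worked example (illustrative bit-timing values): with prop_seg = 6,
* phase_seg1 = 7, phase_seg2 = 2 and sjw = 1, the command above carries
* tseg1 = 6 + 7 = 13, tseg2 = 2, sjw = 1 and nsamples = 1, while the bitrate
* field is passed through unchanged.
*/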
static int kvaser_usb_hydra_set_data_bittiming(struct net_device *netdev)
{
struct kvaser_cmd *cmd;
struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
struct can_bittiming *dbt = &priv->can.data_bittiming;
struct kvaser_usb *dev = priv->dev;
int tseg1 = dbt->prop_seg + dbt->phase_seg1;
int tseg2 = dbt->phase_seg2;
int sjw = dbt->sjw;
int err;
cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
if (!cmd)
return -ENOMEM;
cmd->header.cmd_no = CMD_SET_BUSPARAMS_FD_REQ;
cmd->set_busparams_req.bitrate_d = cpu_to_le32(dbt->bitrate);
cmd->set_busparams_req.sjw_d = (u8)sjw;
cmd->set_busparams_req.tseg1_d = (u8)tseg1;
cmd->set_busparams_req.tseg2_d = (u8)tseg2;
cmd->set_busparams_req.nsamples_d = 1;
if (priv->can.ctrlmode & CAN_CTRLMODE_FD) {
if (priv->can.ctrlmode & CAN_CTRLMODE_FD_NON_ISO)
cmd->set_busparams_req.canfd_mode =
KVASER_USB_HYDRA_BUS_MODE_NONISO;
else
cmd->set_busparams_req.canfd_mode =
KVASER_USB_HYDRA_BUS_MODE_CANFD_ISO;
}
kvaser_usb_hydra_set_cmd_dest_he
(cmd, dev->card_data.hydra.channel_to_he[priv->channel]);
kvaser_usb_hydra_set_cmd_transid
(cmd, kvaser_usb_hydra_get_next_transid(dev));
err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
kfree(cmd);
return err;
}
static int kvaser_usb_hydra_get_berr_counter(const struct net_device *netdev,
struct can_berr_counter *bec)
{
struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
int err;
err = kvaser_usb_hydra_send_simple_cmd(priv->dev,
CMD_GET_CHIP_STATE_REQ,
priv->channel);
if (err)
return err;
*bec = priv->bec;
return 0;
}
static int kvaser_usb_hydra_setup_endpoints(struct kvaser_usb *dev)
{
const struct usb_host_interface *iface_desc;
struct usb_endpoint_descriptor *ep;
int i;
iface_desc = &dev->intf->altsetting[0];
for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) {
ep = &iface_desc->endpoint[i].desc;
if (!dev->bulk_in && usb_endpoint_is_bulk_in(ep) &&
ep->bEndpointAddress == KVASER_USB_HYDRA_BULK_EP_IN_ADDR)
dev->bulk_in = ep;
if (!dev->bulk_out && usb_endpoint_is_bulk_out(ep) &&
ep->bEndpointAddress == KVASER_USB_HYDRA_BULK_EP_OUT_ADDR)
dev->bulk_out = ep;
if (dev->bulk_in && dev->bulk_out)
return 0;
}
return -ENODEV;
}
static int kvaser_usb_hydra_init_card(struct kvaser_usb *dev)
{
int err;
unsigned int i;
struct kvaser_usb_dev_card_data_hydra *card_data =
&dev->card_data.hydra;
card_data->transid = KVASER_USB_HYDRA_MIN_TRANSID;
spin_lock_init(&card_data->transid_lock);
memset(card_data->usb_rx_leftover, 0, KVASER_USB_HYDRA_MAX_CMD_LEN);
card_data->usb_rx_leftover_len = 0;
spin_lock_init(&card_data->usb_rx_leftover_lock);
memset(card_data->channel_to_he, KVASER_USB_HYDRA_HE_ADDRESS_ILLEGAL,
sizeof(card_data->channel_to_he));
card_data->sysdbg_he = 0;
for (i = 0; i < KVASER_USB_MAX_NET_DEVICES; i++) {
err = kvaser_usb_hydra_map_channel
(dev,
(KVASER_USB_HYDRA_TRANSID_CANHE | i),
i, "CAN");
if (err) {
dev_err(&dev->intf->dev,
"CMD_MAP_CHANNEL_REQ failed for CAN%u\n", i);
return err;
}
}
err = kvaser_usb_hydra_map_channel(dev, KVASER_USB_HYDRA_TRANSID_SYSDBG,
0, "SYSDBG");
if (err) {
dev_err(&dev->intf->dev,
"CMD_MAP_CHANNEL_REQ failed for SYSDBG\n");
return err;
}
return 0;
}
static int kvaser_usb_hydra_get_software_info(struct kvaser_usb *dev)
{
struct kvaser_cmd cmd;
int err;
err = kvaser_usb_hydra_send_simple_cmd(dev, CMD_GET_SOFTWARE_INFO_REQ,
-1);
if (err)
return err;
memset(&cmd, 0, sizeof(struct kvaser_cmd));
err = kvaser_usb_hydra_wait_cmd(dev, CMD_GET_SOFTWARE_INFO_RESP, &cmd);
if (err)
return err;
dev->max_tx_urbs = min_t(unsigned int, KVASER_USB_MAX_TX_URBS,
le16_to_cpu(cmd.sw_info.max_outstanding_tx));
return 0;
}
static int kvaser_usb_hydra_get_software_details(struct kvaser_usb *dev)
{
struct kvaser_cmd *cmd;
int err;
u32 flags;
struct kvaser_usb_dev_card_data *card_data = &dev->card_data;
cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
if (!cmd)
return -ENOMEM;
cmd->header.cmd_no = CMD_GET_SOFTWARE_DETAILS_REQ;
cmd->sw_detail_req.use_ext_cmd = 1;
kvaser_usb_hydra_set_cmd_dest_he
(cmd, KVASER_USB_HYDRA_HE_ADDRESS_ILLEGAL);
kvaser_usb_hydra_set_cmd_transid
(cmd, kvaser_usb_hydra_get_next_transid(dev));
err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
if (err)
goto end;
err = kvaser_usb_hydra_wait_cmd(dev, CMD_GET_SOFTWARE_DETAILS_RESP,
cmd);
if (err)
goto end;
dev->fw_version = le32_to_cpu(cmd->sw_detail_res.sw_version);
flags = le32_to_cpu(cmd->sw_detail_res.sw_flags);
if (flags & KVASER_USB_HYDRA_SW_FLAG_FW_BAD) {
dev_err(&dev->intf->dev,
"Bad firmware, device refuse to run!\n");
err = -EINVAL;
goto end;
}
if (flags & KVASER_USB_HYDRA_SW_FLAG_FW_BETA)
dev_info(&dev->intf->dev, "Beta firmware in use\n");
if (flags & KVASER_USB_HYDRA_SW_FLAG_EXT_CAP)
card_data->capabilities |= KVASER_USB_CAP_EXT_CAP;
if (flags & KVASER_USB_HYDRA_SW_FLAG_EXT_CMD)
card_data->capabilities |= KVASER_USB_HYDRA_CAP_EXT_CMD;
if (flags & KVASER_USB_HYDRA_SW_FLAG_CANFD)
card_data->ctrlmode_supported |= CAN_CTRLMODE_FD;
if (flags & KVASER_USB_HYDRA_SW_FLAG_NONISO)
card_data->ctrlmode_supported |= CAN_CTRLMODE_FD_NON_ISO;
if (flags & KVASER_USB_HYDRA_SW_FLAG_FREQ_80M)
dev->cfg = &kvaser_usb_hydra_dev_cfg_kcan;
else
dev->cfg = &kvaser_usb_hydra_dev_cfg_flexc;
end:
kfree(cmd);
return err;
}
static int kvaser_usb_hydra_get_card_info(struct kvaser_usb *dev)
{
struct kvaser_cmd cmd;
int err;
err = kvaser_usb_hydra_send_simple_cmd(dev, CMD_GET_CARD_INFO_REQ, -1);
if (err)
return err;
memset(&cmd, 0, sizeof(struct kvaser_cmd));
err = kvaser_usb_hydra_wait_cmd(dev, CMD_GET_CARD_INFO_RESP, &cmd);
if (err)
return err;
dev->nchannels = cmd.card_info.nchannels;
if (dev->nchannels > KVASER_USB_MAX_NET_DEVICES)
return -EINVAL;
return 0;
}
static int kvaser_usb_hydra_get_capabilities(struct kvaser_usb *dev)
{
int err;
u16 status;
if (!(dev->card_data.capabilities & KVASER_USB_CAP_EXT_CAP)) {
dev_info(&dev->intf->dev,
"No extended capability support. Upgrade your device.\n");
return 0;
}
err = kvaser_usb_hydra_get_single_capability
(dev,
KVASER_USB_HYDRA_CAP_CMD_LISTEN_MODE,
&status);
if (err)
return err;
if (status)
dev_info(&dev->intf->dev,
"KVASER_USB_HYDRA_CAP_CMD_LISTEN_MODE failed %u\n",
status);
err = kvaser_usb_hydra_get_single_capability
(dev,
KVASER_USB_HYDRA_CAP_CMD_ERR_REPORT,
&status);
if (err)
return err;
if (status)
dev_info(&dev->intf->dev,
"KVASER_USB_HYDRA_CAP_CMD_ERR_REPORT failed %u\n",
status);
err = kvaser_usb_hydra_get_single_capability
(dev, KVASER_USB_HYDRA_CAP_CMD_ONE_SHOT,
&status);
if (err)
return err;
if (status)
dev_info(&dev->intf->dev,
"KVASER_USB_HYDRA_CAP_CMD_ONE_SHOT failed %u\n",
status);
return 0;
}
static int kvaser_usb_hydra_set_opt_mode(const struct kvaser_usb_net_priv *priv)
{
struct kvaser_usb *dev = priv->dev;
struct kvaser_cmd *cmd;
int err;
if ((priv->can.ctrlmode &
(CAN_CTRLMODE_FD | CAN_CTRLMODE_FD_NON_ISO)) ==
CAN_CTRLMODE_FD_NON_ISO) {
netdev_warn(priv->netdev,
"CTRLMODE_FD shall be on if CTRLMODE_FD_NON_ISO is on\n");
return -EINVAL;
}
cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
if (!cmd)
return -ENOMEM;
cmd->header.cmd_no = CMD_SET_DRIVERMODE_REQ;
kvaser_usb_hydra_set_cmd_dest_he
(cmd, dev->card_data.hydra.channel_to_he[priv->channel]);
kvaser_usb_hydra_set_cmd_transid
(cmd, kvaser_usb_hydra_get_next_transid(dev));
if (priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY)
cmd->set_ctrlmode.mode = KVASER_USB_HYDRA_CTRLMODE_LISTEN;
else
cmd->set_ctrlmode.mode = KVASER_USB_HYDRA_CTRLMODE_NORMAL;
err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
kfree(cmd);
return err;
}
static int kvaser_usb_hydra_start_chip(struct kvaser_usb_net_priv *priv)
{
int err;
init_completion(&priv->start_comp);
err = kvaser_usb_hydra_send_simple_cmd(priv->dev, CMD_START_CHIP_REQ,
priv->channel);
if (err)
return err;
if (!wait_for_completion_timeout(&priv->start_comp,
msecs_to_jiffies(KVASER_USB_TIMEOUT)))
return -ETIMEDOUT;
return 0;
}
static int kvaser_usb_hydra_stop_chip(struct kvaser_usb_net_priv *priv)
{
int err;
init_completion(&priv->stop_comp);
/* Make sure we do not report an invalid BUS_OFF from CMD_CHIP_STATE_EVENT;
* see the comment in kvaser_usb_hydra_update_state().
*/
priv->can.state = CAN_STATE_STOPPED;
err = kvaser_usb_hydra_send_simple_cmd(priv->dev, CMD_STOP_CHIP_REQ,
priv->channel);
if (err)
return err;
if (!wait_for_completion_timeout(&priv->stop_comp,
msecs_to_jiffies(KVASER_USB_TIMEOUT)))
return -ETIMEDOUT;
return 0;
}
static int kvaser_usb_hydra_flush_queue(struct kvaser_usb_net_priv *priv)
{
int err;
init_completion(&priv->flush_comp);
err = kvaser_usb_hydra_send_simple_cmd(priv->dev, CMD_FLUSH_QUEUE,
priv->channel);
if (err)
return err;
if (!wait_for_completion_timeout(&priv->flush_comp,
msecs_to_jiffies(KVASER_USB_TIMEOUT)))
return -ETIMEDOUT;
return 0;
}
/* A single extended hydra command can be transmitted in multiple transfers,
* so we have to buffer partial hydra commands and handle them in the next
* callback (a worked example follows the function below).
*/
static void kvaser_usb_hydra_read_bulk_callback(struct kvaser_usb *dev,
void *buf, int len)
{
unsigned long irq_flags;
struct kvaser_cmd *cmd;
int pos = 0;
size_t cmd_len;
struct kvaser_usb_dev_card_data_hydra *card_data =
&dev->card_data.hydra;
int usb_rx_leftover_len;
spinlock_t *usb_rx_leftover_lock = &card_data->usb_rx_leftover_lock;
spin_lock_irqsave(usb_rx_leftover_lock, irq_flags);
usb_rx_leftover_len = card_data->usb_rx_leftover_len;
if (usb_rx_leftover_len) {
int remaining_bytes;
cmd = (struct kvaser_cmd *)card_data->usb_rx_leftover;
cmd_len = kvaser_usb_hydra_cmd_size(cmd);
remaining_bytes = min_t(unsigned int, len,
cmd_len - usb_rx_leftover_len);
/* Make sure we do not overflow usb_rx_leftover */
if (remaining_bytes + usb_rx_leftover_len >
KVASER_USB_HYDRA_MAX_CMD_LEN) {
dev_err(&dev->intf->dev, "Format error\n");
spin_unlock_irqrestore(usb_rx_leftover_lock, irq_flags);
return;
}
memcpy(card_data->usb_rx_leftover + usb_rx_leftover_len, buf,
remaining_bytes);
pos += remaining_bytes;
if (remaining_bytes + usb_rx_leftover_len == cmd_len) {
kvaser_usb_hydra_handle_cmd(dev, cmd);
usb_rx_leftover_len = 0;
} else {
/* Command still not complete */
usb_rx_leftover_len += remaining_bytes;
}
card_data->usb_rx_leftover_len = usb_rx_leftover_len;
}
spin_unlock_irqrestore(usb_rx_leftover_lock, irq_flags);
while (pos < len) {
cmd = buf + pos;
cmd_len = kvaser_usb_hydra_cmd_size(cmd);
if (pos + cmd_len > len) {
/* We got the first part of a command */
int leftover_bytes;
leftover_bytes = len - pos;
/* Make sure we do not overflow usb_rx_leftover */
if (leftover_bytes > KVASER_USB_HYDRA_MAX_CMD_LEN) {
dev_err(&dev->intf->dev, "Format error\n");
return;
}
spin_lock_irqsave(usb_rx_leftover_lock, irq_flags);
memcpy(card_data->usb_rx_leftover, buf + pos,
leftover_bytes);
card_data->usb_rx_leftover_len = leftover_bytes;
spin_unlock_irqrestore(usb_rx_leftover_lock, irq_flags);
break;
}
kvaser_usb_hydra_handle_cmd(dev, cmd);
pos += cmd_len;
}
}
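/* Worked example (illustrative transfer sizes): assume a 96-byte extended
* command arrives split over two bulk transfers of 64 and 32 bytes. The
* first call finds cmd_len (96) larger than the remaining buffer, copies the
* 64 bytes into usb_rx_leftover and records usb_rx_leftover_len = 64. The
* second call sees the leftover, reads cmd_len = 96 from the buffered header,
* copies the missing 32 bytes, dispatches the completed command via
* kvaser_usb_hydra_handle_cmd() and then continues parsing at pos = 32,
* which in this scenario is already the end of the transfer.
*/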
static void *
kvaser_usb_hydra_frame_to_cmd(const struct kvaser_usb_net_priv *priv,
const struct sk_buff *skb, int *frame_len,
int *cmd_len, u16 transid)
{
void *buf;
if (priv->dev->card_data.capabilities & KVASER_USB_HYDRA_CAP_EXT_CMD)
buf = kvaser_usb_hydra_frame_to_cmd_ext(priv, skb, frame_len,
cmd_len, transid);
else
buf = kvaser_usb_hydra_frame_to_cmd_std(priv, skb, frame_len,
cmd_len, transid);
return buf;
}
const struct kvaser_usb_dev_ops kvaser_usb_hydra_dev_ops = {
.dev_set_mode = kvaser_usb_hydra_set_mode,
.dev_set_bittiming = kvaser_usb_hydra_set_bittiming,
.dev_set_data_bittiming = kvaser_usb_hydra_set_data_bittiming,
.dev_get_berr_counter = kvaser_usb_hydra_get_berr_counter,
.dev_setup_endpoints = kvaser_usb_hydra_setup_endpoints,
.dev_init_card = kvaser_usb_hydra_init_card,
.dev_get_software_info = kvaser_usb_hydra_get_software_info,
.dev_get_software_details = kvaser_usb_hydra_get_software_details,
.dev_get_card_info = kvaser_usb_hydra_get_card_info,
.dev_get_capabilities = kvaser_usb_hydra_get_capabilities,
.dev_set_opt_mode = kvaser_usb_hydra_set_opt_mode,
.dev_start_chip = kvaser_usb_hydra_start_chip,
.dev_stop_chip = kvaser_usb_hydra_stop_chip,
.dev_reset_chip = NULL,
.dev_flush_queue = kvaser_usb_hydra_flush_queue,
.dev_read_bulk_callback = kvaser_usb_hydra_read_bulk_callback,
.dev_frame_to_cmd = kvaser_usb_hydra_frame_to_cmd,
};
static const struct kvaser_usb_dev_cfg kvaser_usb_hydra_dev_cfg_kcan = {
.clock = {
.freq = 80000000,
},
.timestamp_freq = 80,
.bittiming_const = &kvaser_usb_hydra_kcan_bittiming_c,
.data_bittiming_const = &kvaser_usb_hydra_kcan_bittiming_c,
};
static const struct kvaser_usb_dev_cfg kvaser_usb_hydra_dev_cfg_flexc = {
.clock = {
.freq = 24000000,
},
.timestamp_freq = 1,
.bittiming_const = &kvaser_usb_hydra_flexc_bittiming_c,
};
/*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation version 2.
*
* Parts of this driver are based on the following:
// SPDX-License-Identifier: GPL-2.0
/* Parts of this driver are based on the following:
* - Kvaser linux leaf driver (version 4.78)
* - CAN driver for esd CAN-USB/2
* - Kvaser linux usbcanII driver (version 5.3)
*
* Copyright (C) 2002-2006 KVASER AB, Sweden. All rights reserved.
* Copyright (C) 2002-2018 KVASER AB, Sweden. All rights reserved.
* Copyright (C) 2010 Matthias Fuchs <matthias.fuchs@esd.eu>, esd gmbh
* Copyright (C) 2012 Olivier Sobrie <olivier@sobrie.be>
* Copyright (C) 2015 Valeo S.A.
*/
#include <linux/spinlock.h>
#include <linux/kernel.h>
#include <linux/completion.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/gfp.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/spinlock.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/usb.h>
#include <linux/can.h>
#include <linux/can/dev.h>
#include <linux/can/error.h>
#include <linux/can/netlink.h>
#define MAX_RX_URBS 4
#define START_TIMEOUT 1000 /* msecs */
#define STOP_TIMEOUT 1000 /* msecs */
#define USB_SEND_TIMEOUT 1000 /* msecs */
#define USB_RECV_TIMEOUT 1000 /* msecs */
#define RX_BUFFER_SIZE 3072
#define CAN_USB_CLOCK 8000000
#define MAX_NET_DEVICES 3
#define MAX_USBCAN_NET_DEVICES 2
/* Kvaser Leaf USB devices */
#define KVASER_VENDOR_ID 0x0bfd
#define USB_LEAF_DEVEL_PRODUCT_ID 10
#define USB_LEAF_LITE_PRODUCT_ID 11
#define USB_LEAF_PRO_PRODUCT_ID 12
#define USB_LEAF_SPRO_PRODUCT_ID 14
#define USB_LEAF_PRO_LS_PRODUCT_ID 15
#define USB_LEAF_PRO_SWC_PRODUCT_ID 16
#define USB_LEAF_PRO_LIN_PRODUCT_ID 17
#define USB_LEAF_SPRO_LS_PRODUCT_ID 18
#define USB_LEAF_SPRO_SWC_PRODUCT_ID 19
#define USB_MEMO2_DEVEL_PRODUCT_ID 22
#define USB_MEMO2_HSHS_PRODUCT_ID 23
#define USB_UPRO_HSHS_PRODUCT_ID 24
#define USB_LEAF_LITE_GI_PRODUCT_ID 25
#define USB_LEAF_PRO_OBDII_PRODUCT_ID 26
#define USB_MEMO2_HSLS_PRODUCT_ID 27
#define USB_LEAF_LITE_CH_PRODUCT_ID 28
#define USB_BLACKBIRD_SPRO_PRODUCT_ID 29
#define USB_OEM_MERCURY_PRODUCT_ID 34
#define USB_OEM_LEAF_PRODUCT_ID 35
#define USB_CAN_R_PRODUCT_ID 39
#define USB_LEAF_LITE_V2_PRODUCT_ID 288
#define USB_MINI_PCIE_HS_PRODUCT_ID 289
#define USB_LEAF_LIGHT_HS_V2_OEM_PRODUCT_ID 290
#define USB_USBCAN_LIGHT_2HS_PRODUCT_ID 291
#define USB_MINI_PCIE_2HS_PRODUCT_ID 292
static inline bool kvaser_is_leaf(const struct usb_device_id *id)
{
return id->idProduct >= USB_LEAF_DEVEL_PRODUCT_ID &&
id->idProduct <= USB_MINI_PCIE_2HS_PRODUCT_ID;
}
/* Kvaser USBCan-II devices */
#define USB_USBCAN_REVB_PRODUCT_ID 2
#define USB_VCI2_PRODUCT_ID 3
#define USB_USBCAN2_PRODUCT_ID 4
#define USB_MEMORATOR_PRODUCT_ID 5
#include "kvaser_usb.h"
static inline bool kvaser_is_usbcan(const struct usb_device_id *id)
{
return id->idProduct >= USB_USBCAN_REVB_PRODUCT_ID &&
id->idProduct <= USB_MEMORATOR_PRODUCT_ID;
}
/* Forward declaration */
static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg;
/* USB devices features */
#define KVASER_HAS_SILENT_MODE BIT(0)
#define KVASER_HAS_TXRX_ERRORS BIT(1)
#define CAN_USB_CLOCK 8000000
#define MAX_USBCAN_NET_DEVICES 2
/* Message header size */
#define MSG_HEADER_LEN 2
/* Command header size */
#define CMD_HEADER_LEN 2
/* Can message flags */
/* Kvaser CAN message flags */
#define MSG_FLAG_ERROR_FRAME BIT(0)
#define MSG_FLAG_OVERRUN BIT(1)
#define MSG_FLAG_NERR BIT(2)
@@ -98,48 +47,37 @@ static inline bool kvaser_is_usbcan(const struct usb_device_id *id)
#define MSG_FLAG_TX_ACK BIT(6)
#define MSG_FLAG_TX_REQUEST BIT(7)
/* Can states (M16C CxSTRH register) */
/* CAN states (M16C CxSTRH register) */
#define M16C_STATE_BUS_RESET BIT(0)
#define M16C_STATE_BUS_ERROR BIT(4)
#define M16C_STATE_BUS_PASSIVE BIT(5)
#define M16C_STATE_BUS_OFF BIT(6)
/* Can msg ids */
/* Leaf/usbcan command ids */
#define CMD_RX_STD_MESSAGE 12
#define CMD_TX_STD_MESSAGE 13
#define CMD_RX_EXT_MESSAGE 14
#define CMD_TX_EXT_MESSAGE 15
#define CMD_SET_BUS_PARAMS 16
#define CMD_GET_BUS_PARAMS 17
#define CMD_GET_BUS_PARAMS_REPLY 18
#define CMD_GET_CHIP_STATE 19
#define CMD_CHIP_STATE_EVENT 20
#define CMD_SET_CTRL_MODE 21
#define CMD_GET_CTRL_MODE 22
#define CMD_GET_CTRL_MODE_REPLY 23
#define CMD_RESET_CHIP 24
#define CMD_RESET_CARD 25
#define CMD_START_CHIP 26
#define CMD_START_CHIP_REPLY 27
#define CMD_STOP_CHIP 28
#define CMD_STOP_CHIP_REPLY 29
#define CMD_LEAF_GET_CARD_INFO2 32
#define CMD_USBCAN_RESET_CLOCK 32
#define CMD_USBCAN_CLOCK_OVERFLOW_EVENT 33
#define CMD_GET_CARD_INFO 34
#define CMD_GET_CARD_INFO_REPLY 35
#define CMD_GET_SOFTWARE_INFO 38
#define CMD_GET_SOFTWARE_INFO_REPLY 39
#define CMD_ERROR_EVENT 45
#define CMD_FLUSH_QUEUE 48
#define CMD_RESET_ERROR_COUNTER 49
#define CMD_TX_ACKNOWLEDGE 50
#define CMD_CAN_ERROR_EVENT 51
#define CMD_FLUSH_QUEUE_REPLY 68
#define CMD_LEAF_USB_THROTTLE 77
#define CMD_LEAF_LOG_MESSAGE 106
/* error factors */
@@ -179,33 +117,16 @@ static inline bool kvaser_is_usbcan(const struct usb_device_id *id)
/* Extended CAN identifier flag */
#define KVASER_EXTENDED_FRAME BIT(31)
/* Kvaser USB CAN dongles are divided into two major families:
* - Leaf: Based on Renesas M32C, running firmware labeled as 'filo'
* - UsbcanII: Based on Renesas M16C, running firmware labeled as 'helios'
*/
enum kvaser_usb_family {
KVASER_LEAF,
KVASER_USBCAN,
};
struct kvaser_msg_simple {
struct kvaser_cmd_simple {
u8 tid;
u8 channel;
} __packed;
struct kvaser_msg_cardinfo {
struct kvaser_cmd_cardinfo {
u8 tid;
u8 nchannels;
union {
struct {
__le32 serial_number;
__le32 padding;
} __packed leaf0;
struct {
__le32 serial_number_low;
__le32 serial_number_high;
} __packed usbcan0;
} __packed;
__le32 serial_number;
__le32 padding0;
__le32 clock_resolution;
__le32 mfgdate;
u8 ean[8];
@@ -218,17 +139,10 @@ struct kvaser_msg_cardinfo {
u8 padding;
} __packed usbcan1;
} __packed;
__le16 padding;
__le16 padding1;
} __packed;
struct kvaser_msg_cardinfo2 {
u8 tid;
u8 reserved;
u8 pcb_id[24];
__le32 oem_unlock_code;
} __packed;
struct leaf_msg_softinfo {
struct leaf_cmd_softinfo {
u8 tid;
u8 padding0;
__le32 sw_options;
@@ -237,7 +151,7 @@ struct leaf_msg_softinfo {
__le16 padding1[9];
} __packed;
struct usbcan_msg_softinfo {
struct usbcan_cmd_softinfo {
u8 tid;
u8 fw_name[5];
__le16 max_outstanding_tx;
@@ -247,7 +161,7 @@ struct usbcan_msg_softinfo {
__le16 sw_options;
} __packed;
struct kvaser_msg_busparams {
struct kvaser_cmd_busparams {
u8 tid;
u8 channel;
__le32 bitrate;
@@ -257,10 +171,10 @@ struct kvaser_msg_busparams {
u8 no_samp;
} __packed;
struct kvaser_msg_tx_can {
struct kvaser_cmd_tx_can {
u8 channel;
u8 tid;
u8 msg[14];
u8 data[14];
union {
struct {
u8 padding;
@@ -273,28 +187,28 @@ struct kvaser_msg_tx_can {
} __packed;
} __packed;
struct kvaser_msg_rx_can_header {
struct kvaser_cmd_rx_can_header {
u8 channel;
u8 flag;
} __packed;
struct leaf_msg_rx_can {
struct leaf_cmd_rx_can {
u8 channel;
u8 flag;
__le16 time[3];
u8 msg[14];
u8 data[14];
} __packed;
struct usbcan_msg_rx_can {
struct usbcan_cmd_rx_can {
u8 channel;
u8 flag;
u8 msg[14];
u8 data[14];
__le16 time;
} __packed;
struct leaf_msg_chip_state_event {
struct leaf_cmd_chip_state_event {
u8 tid;
u8 channel;
@@ -306,7 +220,7 @@ struct leaf_msg_chip_state_event {
u8 padding[3];
} __packed;
struct usbcan_msg_chip_state_event {
struct usbcan_cmd_chip_state_event {
u8 tid;
u8 channel;
@@ -318,29 +232,12 @@ struct usbcan_msg_chip_state_event {
u8 padding[3];
} __packed;
struct kvaser_msg_tx_acknowledge_header {
u8 channel;
u8 tid;
} __packed;
struct leaf_msg_tx_acknowledge {
u8 channel;
u8 tid;
__le16 time[3];
u8 flags;
u8 time_offset;
} __packed;
struct usbcan_msg_tx_acknowledge {
struct kvaser_cmd_tx_acknowledge_header {
u8 channel;
u8 tid;
__le16 time;
__le16 padding;
} __packed;
struct leaf_msg_error_event {
struct leaf_cmd_error_event {
u8 tid;
u8 flags;
__le16 time[3];
@@ -352,7 +249,7 @@ struct leaf_msg_error_event {
u8 error_factor;
} __packed;
struct usbcan_msg_error_event {
struct usbcan_cmd_error_event {
u8 tid;
u8 padding;
u8 tx_errors_count_ch0;
@@ -364,21 +261,21 @@ struct usbcan_msg_error_event {
__le16 time;
} __packed;
struct kvaser_msg_ctrl_mode {
struct kvaser_cmd_ctrl_mode {
u8 tid;
u8 channel;
u8 ctrl_mode;
u8 padding[3];
} __packed;
struct kvaser_msg_flush_queue {
struct kvaser_cmd_flush_queue {
u8 tid;
u8 channel;
u8 flags;
u8 padding[3];
} __packed;
struct leaf_msg_log_message {
struct leaf_cmd_log_message {
u8 channel;
u8 flags;
__le16 time[3];
@@ -388,38 +285,35 @@ struct leaf_msg_log_message {
u8 data[8];
} __packed;
struct kvaser_msg {
struct kvaser_cmd {
u8 len;
u8 id;
union {
struct kvaser_msg_simple simple;
struct kvaser_msg_cardinfo cardinfo;
struct kvaser_msg_cardinfo2 cardinfo2;
struct kvaser_msg_busparams busparams;
struct kvaser_cmd_simple simple;
struct kvaser_cmd_cardinfo cardinfo;
struct kvaser_cmd_busparams busparams;
struct kvaser_msg_rx_can_header rx_can_header;
struct kvaser_msg_tx_acknowledge_header tx_acknowledge_header;
struct kvaser_cmd_rx_can_header rx_can_header;
struct kvaser_cmd_tx_acknowledge_header tx_acknowledge_header;
union {
struct leaf_msg_softinfo softinfo;
struct leaf_msg_rx_can rx_can;
struct leaf_msg_chip_state_event chip_state_event;
struct leaf_msg_tx_acknowledge tx_acknowledge;
struct leaf_msg_error_event error_event;
struct leaf_msg_log_message log_message;
struct leaf_cmd_softinfo softinfo;
struct leaf_cmd_rx_can rx_can;
struct leaf_cmd_chip_state_event chip_state_event;
struct leaf_cmd_error_event error_event;
struct leaf_cmd_log_message log_message;
} __packed leaf;
union {
struct usbcan_msg_softinfo softinfo;
struct usbcan_msg_rx_can rx_can;
struct usbcan_msg_chip_state_event chip_state_event;
struct usbcan_msg_tx_acknowledge tx_acknowledge;
struct usbcan_msg_error_event error_event;
struct usbcan_cmd_softinfo softinfo;
struct usbcan_cmd_rx_can rx_can;
struct usbcan_cmd_chip_state_event chip_state_event;
struct usbcan_cmd_error_event error_event;
} __packed usbcan;
struct kvaser_msg_tx_can tx_can;
struct kvaser_msg_ctrl_mode ctrl_mode;
struct kvaser_msg_flush_queue flush_queue;
struct kvaser_cmd_tx_can tx_can;
struct kvaser_cmd_ctrl_mode ctrl_mode;
struct kvaser_cmd_flush_queue flush_queue;
} u;
} __packed;
@@ -433,7 +327,7 @@ struct kvaser_msg {
* and decide the error event's channel. Thus for USBCAN, the channel
* field is only advisory.
*/
struct kvaser_usb_error_summary {
struct kvaser_usb_err_summary {
u8 channel, status, txerr, rxerr;
union {
struct {
@@ -446,176 +340,101 @@ struct kvaser_usb_error_summary {
};
};
/* Context for an outstanding, not yet ACKed, transmission */
struct kvaser_usb_tx_urb_context {
struct kvaser_usb_net_priv *priv;
u32 echo_index;
int dlc;
};
struct kvaser_usb {
struct usb_device *udev;
struct kvaser_usb_net_priv *nets[MAX_NET_DEVICES];
struct usb_endpoint_descriptor *bulk_in, *bulk_out;
struct usb_anchor rx_submitted;
/* @max_tx_urbs: Firmware-reported maximum number of outstanding,
* not yet ACKed, transmissions on this device. This value is
* also used as a sentinel for marking free tx contexts.
*/
u32 fw_version;
unsigned int nchannels;
unsigned int max_tx_urbs;
enum kvaser_usb_family family;
bool rxinitdone;
void *rxbuf[MAX_RX_URBS];
dma_addr_t rxbuf_dma[MAX_RX_URBS];
};
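/* Example of the sentinel use described above (illustrative limit): with
* max_tx_urbs = 20, in-flight tx contexts carry echo_index values 0..19;
* setting echo_index back to 20 (== max_tx_urbs), as the TX-ack handlers do,
* marks the context as free again. The firmware reports the real limit.
*/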
static void *
kvaser_usb_leaf_frame_to_cmd(const struct kvaser_usb_net_priv *priv,
const struct sk_buff *skb, int *frame_len,
int *cmd_len, u16 transid)
{
struct kvaser_usb *dev = priv->dev;
struct kvaser_cmd *cmd;
u8 *cmd_tx_can_flags = NULL; /* initialized to keep GCC happy */
struct can_frame *cf = (struct can_frame *)skb->data;
struct kvaser_usb_net_priv {
struct can_priv can;
struct can_berr_counter bec;
*frame_len = cf->can_dlc;
struct kvaser_usb *dev;
struct net_device *netdev;
int channel;
cmd = kmalloc(sizeof(*cmd), GFP_ATOMIC);
if (cmd) {
cmd->u.tx_can.tid = transid & 0xff;
cmd->len = *cmd_len = CMD_HEADER_LEN +
sizeof(struct kvaser_cmd_tx_can);
cmd->u.tx_can.channel = priv->channel;
struct completion start_comp, stop_comp;
struct usb_anchor tx_submitted;
switch (dev->card_data.leaf.family) {
case KVASER_LEAF:
cmd_tx_can_flags = &cmd->u.tx_can.leaf.flags;
break;
case KVASER_USBCAN:
cmd_tx_can_flags = &cmd->u.tx_can.usbcan.flags;
break;
}
spinlock_t tx_contexts_lock;
int active_tx_contexts;
struct kvaser_usb_tx_urb_context tx_contexts[];
};
*cmd_tx_can_flags = 0;
static const struct usb_device_id kvaser_usb_table[] = {
/* Leaf family IDs */
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_DEVEL_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS |
KVASER_HAS_SILENT_MODE },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS |
KVASER_HAS_SILENT_MODE },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_LS_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS |
KVASER_HAS_SILENT_MODE },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_SWC_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS |
KVASER_HAS_SILENT_MODE },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_LIN_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS |
KVASER_HAS_SILENT_MODE },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_LS_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS |
KVASER_HAS_SILENT_MODE },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_SWC_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS |
KVASER_HAS_SILENT_MODE },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_DEVEL_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS |
KVASER_HAS_SILENT_MODE },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_HSHS_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS |
KVASER_HAS_SILENT_MODE },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_UPRO_HSHS_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_GI_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_OBDII_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS |
KVASER_HAS_SILENT_MODE },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_HSLS_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_CH_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_BLACKBIRD_SPRO_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_OEM_MERCURY_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_OEM_LEAF_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_CAN_R_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_V2_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_HS_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_HS_V2_OEM_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_2HS_PRODUCT_ID) },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_2HS_PRODUCT_ID) },
/* USBCANII family IDs */
{ USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN2_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_REVB_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_MEMORATOR_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS },
{ USB_DEVICE(KVASER_VENDOR_ID, USB_VCI2_PRODUCT_ID),
.driver_info = KVASER_HAS_TXRX_ERRORS },
{ }
};
MODULE_DEVICE_TABLE(usb, kvaser_usb_table);
if (cf->can_id & CAN_EFF_FLAG) {
cmd->id = CMD_TX_EXT_MESSAGE;
cmd->u.tx_can.data[0] = (cf->can_id >> 24) & 0x1f;
cmd->u.tx_can.data[1] = (cf->can_id >> 18) & 0x3f;
cmd->u.tx_can.data[2] = (cf->can_id >> 14) & 0x0f;
cmd->u.tx_can.data[3] = (cf->can_id >> 6) & 0xff;
cmd->u.tx_can.data[4] = cf->can_id & 0x3f;
} else {
cmd->id = CMD_TX_STD_MESSAGE;
cmd->u.tx_can.data[0] = (cf->can_id >> 6) & 0x1f;
cmd->u.tx_can.data[1] = cf->can_id & 0x3f;
}
static inline int kvaser_usb_send_msg(const struct kvaser_usb *dev,
struct kvaser_msg *msg)
{
int actual_len;
cmd->u.tx_can.data[5] = cf->can_dlc;
memcpy(&cmd->u.tx_can.data[6], cf->data, cf->can_dlc);
return usb_bulk_msg(dev->udev,
usb_sndbulkpipe(dev->udev,
dev->bulk_out->bEndpointAddress),
msg, msg->len, &actual_len,
USB_SEND_TIMEOUT);
if (cf->can_id & CAN_RTR_FLAG)
*cmd_tx_can_flags |= MSG_FLAG_REMOTE_FRAME;
}
return cmd;
}
static int kvaser_usb_wait_msg(const struct kvaser_usb *dev, u8 id,
struct kvaser_msg *msg)
static int kvaser_usb_leaf_wait_cmd(const struct kvaser_usb *dev, u8 id,
struct kvaser_cmd *cmd)
{
struct kvaser_msg *tmp;
struct kvaser_cmd *tmp;
void *buf;
int actual_len;
int err;
int pos;
unsigned long to = jiffies + msecs_to_jiffies(USB_RECV_TIMEOUT);
unsigned long to = jiffies + msecs_to_jiffies(KVASER_USB_TIMEOUT);
buf = kzalloc(RX_BUFFER_SIZE, GFP_KERNEL);
buf = kzalloc(KVASER_USB_RX_BUFFER_SIZE, GFP_KERNEL);
if (!buf)
return -ENOMEM;
do {
err = usb_bulk_msg(dev->udev,
usb_rcvbulkpipe(dev->udev,
dev->bulk_in->bEndpointAddress),
buf, RX_BUFFER_SIZE, &actual_len,
USB_RECV_TIMEOUT);
err = kvaser_usb_recv_cmd(dev, buf, KVASER_USB_RX_BUFFER_SIZE,
&actual_len);
if (err < 0)
goto end;
pos = 0;
while (pos <= actual_len - MSG_HEADER_LEN) {
while (pos <= actual_len - CMD_HEADER_LEN) {
tmp = buf + pos;
/* Handle messages crossing the USB endpoint max packet
/* Handle commands crossing the USB endpoint max packet
* size boundary. Check kvaser_usb_read_bulk_callback()
* for further details.
*/
if (tmp->len == 0) {
pos = round_up(pos, le16_to_cpu(dev->bulk_in->
wMaxPacketSize));
pos = round_up(pos,
le16_to_cpu
(dev->bulk_in->wMaxPacketSize));
continue;
}
if (pos + tmp->len > actual_len) {
dev_err_ratelimited(dev->udev->dev.parent,
dev_err_ratelimited(&dev->intf->dev,
"Format error\n");
break;
}
if (tmp->id == id) {
memcpy(msg, tmp, tmp->len);
memcpy(cmd, tmp, tmp->len);
goto end;
}
@@ -631,94 +450,109 @@ static int kvaser_usb_wait_msg(const struct kvaser_usb *dev, u8 id,
return err;
}
static int kvaser_usb_send_simple_msg(const struct kvaser_usb *dev,
u8 msg_id, int channel)
static int kvaser_usb_leaf_send_simple_cmd(const struct kvaser_usb *dev,
u8 cmd_id, int channel)
{
struct kvaser_msg *msg;
struct kvaser_cmd *cmd;
int rc;
msg = kmalloc(sizeof(*msg), GFP_KERNEL);
if (!msg)
cmd = kmalloc(sizeof(*cmd), GFP_KERNEL);
if (!cmd)
return -ENOMEM;
msg->id = msg_id;
msg->len = MSG_HEADER_LEN + sizeof(struct kvaser_msg_simple);
msg->u.simple.channel = channel;
msg->u.simple.tid = 0xff;
cmd->id = cmd_id;
cmd->len = CMD_HEADER_LEN + sizeof(struct kvaser_cmd_simple);
cmd->u.simple.channel = channel;
cmd->u.simple.tid = 0xff;
rc = kvaser_usb_send_msg(dev, msg);
rc = kvaser_usb_send_cmd(dev, cmd, cmd->len);
kfree(msg);
kfree(cmd);
return rc;
}
static int kvaser_usb_get_software_info(struct kvaser_usb *dev)
static int kvaser_usb_leaf_get_software_info_inner(struct kvaser_usb *dev)
{
struct kvaser_msg msg;
struct kvaser_cmd cmd;
int err;
err = kvaser_usb_send_simple_msg(dev, CMD_GET_SOFTWARE_INFO, 0);
err = kvaser_usb_leaf_send_simple_cmd(dev, CMD_GET_SOFTWARE_INFO, 0);
if (err)
return err;
err = kvaser_usb_wait_msg(dev, CMD_GET_SOFTWARE_INFO_REPLY, &msg);
err = kvaser_usb_leaf_wait_cmd(dev, CMD_GET_SOFTWARE_INFO_REPLY, &cmd);
if (err)
return err;
switch (dev->family) {
switch (dev->card_data.leaf.family) {
case KVASER_LEAF:
dev->fw_version = le32_to_cpu(msg.u.leaf.softinfo.fw_version);
dev->fw_version = le32_to_cpu(cmd.u.leaf.softinfo.fw_version);
dev->max_tx_urbs =
le16_to_cpu(msg.u.leaf.softinfo.max_outstanding_tx);
le16_to_cpu(cmd.u.leaf.softinfo.max_outstanding_tx);
break;
case KVASER_USBCAN:
dev->fw_version = le32_to_cpu(msg.u.usbcan.softinfo.fw_version);
dev->fw_version = le32_to_cpu(cmd.u.usbcan.softinfo.fw_version);
dev->max_tx_urbs =
le16_to_cpu(msg.u.usbcan.softinfo.max_outstanding_tx);
le16_to_cpu(cmd.u.usbcan.softinfo.max_outstanding_tx);
break;
}
return 0;
}
static int kvaser_usb_get_card_info(struct kvaser_usb *dev)
static int kvaser_usb_leaf_get_software_info(struct kvaser_usb *dev)
{
struct kvaser_msg msg;
int err;
int retry = 3;
err = kvaser_usb_send_simple_msg(dev, CMD_GET_CARD_INFO, 0);
/* On some x86 laptops, plugging a Kvaser device again after
* an unplug makes the firmware always ignore the very first
* command. For such a case, provide some room for retries
* instead of completely exiting the driver.
*/
do {
err = kvaser_usb_leaf_get_software_info_inner(dev);
} while (--retry && err == -ETIMEDOUT);
return err;
}
static int kvaser_usb_leaf_get_card_info(struct kvaser_usb *dev)
{
struct kvaser_cmd cmd;
int err;
err = kvaser_usb_leaf_send_simple_cmd(dev, CMD_GET_CARD_INFO, 0);
if (err)
return err;
err = kvaser_usb_wait_msg(dev, CMD_GET_CARD_INFO_REPLY, &msg);
err = kvaser_usb_leaf_wait_cmd(dev, CMD_GET_CARD_INFO_REPLY, &cmd);
if (err)
return err;
dev->nchannels = msg.u.cardinfo.nchannels;
if ((dev->nchannels > MAX_NET_DEVICES) ||
(dev->family == KVASER_USBCAN &&
dev->nchannels = cmd.u.cardinfo.nchannels;
if (dev->nchannels > KVASER_USB_MAX_NET_DEVICES ||
(dev->card_data.leaf.family == KVASER_USBCAN &&
dev->nchannels > MAX_USBCAN_NET_DEVICES))
return -EINVAL;
return 0;
}
static void kvaser_usb_tx_acknowledge(const struct kvaser_usb *dev,
const struct kvaser_msg *msg)
static void kvaser_usb_leaf_tx_acknowledge(const struct kvaser_usb *dev,
const struct kvaser_cmd *cmd)
{
struct net_device_stats *stats;
struct kvaser_usb_tx_urb_context *context;
struct kvaser_usb_net_priv *priv;
struct sk_buff *skb;
struct can_frame *cf;
unsigned long flags;
u8 channel, tid;
channel = msg->u.tx_acknowledge_header.channel;
tid = msg->u.tx_acknowledge_header.tid;
channel = cmd->u.tx_acknowledge_header.channel;
tid = cmd->u.tx_acknowledge_header.tid;
if (channel >= dev->nchannels) {
dev_err(dev->udev->dev.parent,
dev_err(&dev->intf->dev,
"Invalid channel number (%d)\n", channel);
return;
}
@@ -733,8 +567,10 @@ static void kvaser_usb_tx_acknowledge(const struct kvaser_usb *dev,
context = &priv->tx_contexts[tid % dev->max_tx_urbs];
/* Sometimes the state change doesn't come after a bus-off event */
if (priv->can.restart_ms &&
(priv->can.state >= CAN_STATE_BUS_OFF)) {
if (priv->can.restart_ms && priv->can.state >= CAN_STATE_BUS_OFF) {
struct sk_buff *skb;
struct can_frame *cf;
skb = alloc_can_err_skb(priv->netdev, &cf);
if (skb) {
cf->can_id |= CAN_ERR_RESTARTED;
@@ -766,66 +602,31 @@ static void kvaser_usb_tx_acknowledge(const struct kvaser_usb *dev,
spin_unlock_irqrestore(&priv->tx_contexts_lock, flags);
}
static void kvaser_usb_simple_msg_callback(struct urb *urb)
static int kvaser_usb_leaf_simple_cmd_async(struct kvaser_usb_net_priv *priv,
u8 cmd_id)
{
struct net_device *netdev = urb->context;
kfree(urb->transfer_buffer);
if (urb->status)
netdev_warn(netdev, "urb status received: %d\n",
urb->status);
}
static int kvaser_usb_simple_msg_async(struct kvaser_usb_net_priv *priv,
u8 msg_id)
{
struct kvaser_usb *dev = priv->dev;
struct net_device *netdev = priv->netdev;
struct kvaser_msg *msg;
struct urb *urb;
void *buf;
struct kvaser_cmd *cmd;
int err;
urb = usb_alloc_urb(0, GFP_ATOMIC);
if (!urb)
cmd = kmalloc(sizeof(*cmd), GFP_ATOMIC);
if (!cmd)
return -ENOMEM;
buf = kmalloc(sizeof(struct kvaser_msg), GFP_ATOMIC);
if (!buf) {
usb_free_urb(urb);
return -ENOMEM;
}
msg = (struct kvaser_msg *)buf;
msg->len = MSG_HEADER_LEN + sizeof(struct kvaser_msg_simple);
msg->id = msg_id;
msg->u.simple.channel = priv->channel;
usb_fill_bulk_urb(urb, dev->udev,
usb_sndbulkpipe(dev->udev,
dev->bulk_out->bEndpointAddress),
buf, msg->len,
kvaser_usb_simple_msg_callback, netdev);
usb_anchor_urb(urb, &priv->tx_submitted);
err = usb_submit_urb(urb, GFP_ATOMIC);
if (err) {
netdev_err(netdev, "Error transmitting URB\n");
usb_unanchor_urb(urb);
kfree(buf);
usb_free_urb(urb);
return err;
}
cmd->len = CMD_HEADER_LEN + sizeof(struct kvaser_cmd_simple);
cmd->id = cmd_id;
cmd->u.simple.channel = priv->channel;
usb_free_urb(urb);
err = kvaser_usb_send_cmd_async(priv, cmd, cmd->len);
if (err)
kfree(cmd);
return 0;
return err;
}
static void kvaser_usb_rx_error_update_can_state(struct kvaser_usb_net_priv *priv,
const struct kvaser_usb_error_summary *es,
struct can_frame *cf)
static void
kvaser_usb_leaf_rx_error_update_can_state(struct kvaser_usb_net_priv *priv,
const struct kvaser_usb_err_summary *es,
struct can_frame *cf)
{
struct kvaser_usb *dev = priv->dev;
struct net_device_stats *stats = &priv->netdev->stats;
@@ -833,18 +634,19 @@ static void kvaser_usb_rx_error_update_can_state(struct kvaser_usb_net_priv *pri
netdev_dbg(priv->netdev, "Error status: 0x%02x\n", es->status);
new_state = cur_state = priv->can.state;
new_state = priv->can.state;
cur_state = priv->can.state;
if (es->status & (M16C_STATE_BUS_OFF | M16C_STATE_BUS_RESET))
if (es->status & (M16C_STATE_BUS_OFF | M16C_STATE_BUS_RESET)) {
new_state = CAN_STATE_BUS_OFF;
else if (es->status & M16C_STATE_BUS_PASSIVE)
} else if (es->status & M16C_STATE_BUS_PASSIVE) {
new_state = CAN_STATE_ERROR_PASSIVE;
else if (es->status & M16C_STATE_BUS_ERROR) {
} else if (es->status & M16C_STATE_BUS_ERROR) {
/* Guard against spurious error events after a busoff */
if (cur_state < CAN_STATE_BUS_OFF) {
if ((es->txerr >= 128) || (es->rxerr >= 128))
if (es->txerr >= 128 || es->rxerr >= 128)
new_state = CAN_STATE_ERROR_PASSIVE;
else if ((es->txerr >= 96) || (es->rxerr >= 96))
else if (es->txerr >= 96 || es->rxerr >= 96)
new_state = CAN_STATE_ERROR_WARNING;
else if (cur_state > CAN_STATE_ERROR_ACTIVE)
new_state = CAN_STATE_ERROR_ACTIVE;
@@ -862,12 +664,11 @@ static void kvaser_usb_rx_error_update_can_state(struct kvaser_usb_net_priv *pri
}
if (priv->can.restart_ms &&
(cur_state >= CAN_STATE_BUS_OFF) &&
(new_state < CAN_STATE_BUS_OFF)) {
cur_state >= CAN_STATE_BUS_OFF &&
new_state < CAN_STATE_BUS_OFF)
priv->can.can_stats.restarts++;
}
switch (dev->family) {
switch (dev->card_data.leaf.family) {
case KVASER_LEAF:
if (es->leaf.error_factor) {
priv->can.can_stats.bus_error++;
@@ -879,9 +680,8 @@ static void kvaser_usb_rx_error_update_can_state(struct kvaser_usb_net_priv *pri
stats->tx_errors++;
if (es->usbcan.error_state & USBCAN_ERROR_STATE_RX_ERROR)
stats->rx_errors++;
if (es->usbcan.error_state & USBCAN_ERROR_STATE_BUSERROR) {
if (es->usbcan.error_state & USBCAN_ERROR_STATE_BUSERROR)
priv->can.can_stats.bus_error++;
}
break;
}
@@ -889,17 +689,19 @@ static void kvaser_usb_rx_error_update_can_state(struct kvaser_usb_net_priv *pri
priv->bec.rxerr = es->rxerr;
}
static void kvaser_usb_rx_error(const struct kvaser_usb *dev,
const struct kvaser_usb_error_summary *es)
static void kvaser_usb_leaf_rx_error(const struct kvaser_usb *dev,
const struct kvaser_usb_err_summary *es)
{
struct can_frame *cf, tmp_cf = { .can_id = CAN_ERR_FLAG, .can_dlc = CAN_ERR_DLC };
struct can_frame *cf;
struct can_frame tmp_cf = { .can_id = CAN_ERR_FLAG,
.can_dlc = CAN_ERR_DLC };
struct sk_buff *skb;
struct net_device_stats *stats;
struct kvaser_usb_net_priv *priv;
enum can_state old_state, new_state;
if (es->channel >= dev->nchannels) {
dev_err(dev->udev->dev.parent,
dev_err(&dev->intf->dev,
"Invalid channel number (%d)\n", es->channel);
return;
}
@@ -907,18 +709,18 @@ static void kvaser_usb_rx_error(const struct kvaser_usb *dev,
priv = dev->nets[es->channel];
stats = &priv->netdev->stats;
/* Update all of the can interface's state and error counters before
/* Update all of the CAN interface's state and error counters before
* trying any memory allocation that can actually fail with -ENOMEM.
*
* We send a temporary stack-allocated error can frame to
* We send a temporary stack-allocated error CAN frame to
* can_change_state() for the very same reason.
*
* TODO: Split can_change_state() responsibility between updating the
* can interface's state and counters, and the setting up of can error
* CAN interface's state and counters, and the setting up of CAN error
* frame ID and data to userspace. Remove stack allocation afterwards.
*/
old_state = priv->can.state;
kvaser_usb_rx_error_update_can_state(priv, es, &tmp_cf);
kvaser_usb_leaf_rx_error_update_can_state(priv, es, &tmp_cf);
new_state = priv->can.state;
skb = alloc_can_err_skb(priv->netdev, &cf);
@@ -932,19 +734,20 @@ static void kvaser_usb_rx_error(const struct kvaser_usb *dev,
if (es->status &
(M16C_STATE_BUS_OFF | M16C_STATE_BUS_RESET)) {
if (!priv->can.restart_ms)
kvaser_usb_simple_msg_async(priv, CMD_STOP_CHIP);
kvaser_usb_leaf_simple_cmd_async(priv,
CMD_STOP_CHIP);
netif_carrier_off(priv->netdev);
}
if (priv->can.restart_ms &&
(old_state >= CAN_STATE_BUS_OFF) &&
(new_state < CAN_STATE_BUS_OFF)) {
old_state >= CAN_STATE_BUS_OFF &&
new_state < CAN_STATE_BUS_OFF) {
cf->can_id |= CAN_ERR_RESTARTED;
netif_carrier_on(priv->netdev);
}
}
switch (dev->family) {
switch (dev->card_data.leaf.family) {
case KVASER_LEAF:
if (es->leaf.error_factor) {
cf->can_id |= CAN_ERR_BUSERROR | CAN_ERR_PROT;
@@ -966,9 +769,8 @@ static void kvaser_usb_rx_error(const struct kvaser_usb *dev,
}
break;
case KVASER_USBCAN:
if (es->usbcan.error_state & USBCAN_ERROR_STATE_BUSERROR) {
if (es->usbcan.error_state & USBCAN_ERROR_STATE_BUSERROR)
cf->can_id |= CAN_ERR_BUSERROR;
}
break;
}
@@ -980,19 +782,20 @@ static void kvaser_usb_rx_error(const struct kvaser_usb *dev,
netif_rx(skb);
}
/* For USBCAN, report error to userspace iff the channel's error counter
/* For USBCAN, report error to userspace if the channel's error counter
* has changed, or we're the only channel seeing a bus error state.
*/
static void kvaser_usbcan_conditionally_rx_error(const struct kvaser_usb *dev,
struct kvaser_usb_error_summary *es)
static void
kvaser_usb_leaf_usbcan_conditionally_rx_error(const struct kvaser_usb *dev,
struct kvaser_usb_err_summary *es)
{
struct kvaser_usb_net_priv *priv;
int channel;
unsigned int channel;
bool report_error;
channel = es->channel;
if (channel >= dev->nchannels) {
dev_err(dev->udev->dev.parent,
dev_err(&dev->intf->dev,
"Invalid channel number (%d)\n", channel);
return;
}
@@ -1015,136 +818,119 @@ static void kvaser_usbcan_conditionally_rx_error(const struct kvaser_usb *dev,
}
if (report_error)
kvaser_usb_rx_error(dev, es);
kvaser_usb_leaf_rx_error(dev, es);
}
static void kvaser_usbcan_rx_error(const struct kvaser_usb *dev,
const struct kvaser_msg *msg)
static void kvaser_usb_leaf_usbcan_rx_error(const struct kvaser_usb *dev,
const struct kvaser_cmd *cmd)
{
struct kvaser_usb_error_summary es = { };
struct kvaser_usb_err_summary es = { };
switch (msg->id) {
switch (cmd->id) {
/* Sometimes errors are sent as unsolicited chip state events */
case CMD_CHIP_STATE_EVENT:
es.channel = msg->u.usbcan.chip_state_event.channel;
es.status = msg->u.usbcan.chip_state_event.status;
es.txerr = msg->u.usbcan.chip_state_event.tx_errors_count;
es.rxerr = msg->u.usbcan.chip_state_event.rx_errors_count;
kvaser_usbcan_conditionally_rx_error(dev, &es);
es.channel = cmd->u.usbcan.chip_state_event.channel;
es.status = cmd->u.usbcan.chip_state_event.status;
es.txerr = cmd->u.usbcan.chip_state_event.tx_errors_count;
es.rxerr = cmd->u.usbcan.chip_state_event.rx_errors_count;
kvaser_usb_leaf_usbcan_conditionally_rx_error(dev, &es);
break;
case CMD_CAN_ERROR_EVENT:
es.channel = 0;
es.status = msg->u.usbcan.error_event.status_ch0;
es.txerr = msg->u.usbcan.error_event.tx_errors_count_ch0;
es.rxerr = msg->u.usbcan.error_event.rx_errors_count_ch0;
es.status = cmd->u.usbcan.error_event.status_ch0;
es.txerr = cmd->u.usbcan.error_event.tx_errors_count_ch0;
es.rxerr = cmd->u.usbcan.error_event.rx_errors_count_ch0;
es.usbcan.other_ch_status =
msg->u.usbcan.error_event.status_ch1;
kvaser_usbcan_conditionally_rx_error(dev, &es);
cmd->u.usbcan.error_event.status_ch1;
kvaser_usb_leaf_usbcan_conditionally_rx_error(dev, &es);
/* The USBCAN firmware supports up to 2 channels.
* Now that ch0 was checked, check if ch1 has any errors.
*/
if (dev->nchannels == MAX_USBCAN_NET_DEVICES) {
es.channel = 1;
es.status = msg->u.usbcan.error_event.status_ch1;
es.txerr = msg->u.usbcan.error_event.tx_errors_count_ch1;
es.rxerr = msg->u.usbcan.error_event.rx_errors_count_ch1;
es.status = cmd->u.usbcan.error_event.status_ch1;
es.txerr =
cmd->u.usbcan.error_event.tx_errors_count_ch1;
es.rxerr =
cmd->u.usbcan.error_event.rx_errors_count_ch1;
es.usbcan.other_ch_status =
msg->u.usbcan.error_event.status_ch0;
kvaser_usbcan_conditionally_rx_error(dev, &es);
cmd->u.usbcan.error_event.status_ch0;
kvaser_usb_leaf_usbcan_conditionally_rx_error(dev, &es);
}
break;
default:
dev_err(dev->udev->dev.parent, "Invalid msg id (%d)\n",
msg->id);
dev_err(&dev->intf->dev, "Invalid cmd id (%d)\n", cmd->id);
}
}
static void kvaser_leaf_rx_error(const struct kvaser_usb *dev,
const struct kvaser_msg *msg)
static void kvaser_usb_leaf_leaf_rx_error(const struct kvaser_usb *dev,
const struct kvaser_cmd *cmd)
{
struct kvaser_usb_error_summary es = { };
struct kvaser_usb_err_summary es = { };
switch (msg->id) {
switch (cmd->id) {
case CMD_CAN_ERROR_EVENT:
es.channel = msg->u.leaf.error_event.channel;
es.status = msg->u.leaf.error_event.status;
es.txerr = msg->u.leaf.error_event.tx_errors_count;
es.rxerr = msg->u.leaf.error_event.rx_errors_count;
es.leaf.error_factor = msg->u.leaf.error_event.error_factor;
es.channel = cmd->u.leaf.error_event.channel;
es.status = cmd->u.leaf.error_event.status;
es.txerr = cmd->u.leaf.error_event.tx_errors_count;
es.rxerr = cmd->u.leaf.error_event.rx_errors_count;
es.leaf.error_factor = cmd->u.leaf.error_event.error_factor;
break;
case CMD_LEAF_LOG_MESSAGE:
es.channel = msg->u.leaf.log_message.channel;
es.status = msg->u.leaf.log_message.data[0];
es.txerr = msg->u.leaf.log_message.data[2];
es.rxerr = msg->u.leaf.log_message.data[3];
es.leaf.error_factor = msg->u.leaf.log_message.data[1];
es.channel = cmd->u.leaf.log_message.channel;
es.status = cmd->u.leaf.log_message.data[0];
es.txerr = cmd->u.leaf.log_message.data[2];
es.rxerr = cmd->u.leaf.log_message.data[3];
es.leaf.error_factor = cmd->u.leaf.log_message.data[1];
break;
case CMD_CHIP_STATE_EVENT:
es.channel = msg->u.leaf.chip_state_event.channel;
es.status = msg->u.leaf.chip_state_event.status;
es.txerr = msg->u.leaf.chip_state_event.tx_errors_count;
es.rxerr = msg->u.leaf.chip_state_event.rx_errors_count;
es.channel = cmd->u.leaf.chip_state_event.channel;
es.status = cmd->u.leaf.chip_state_event.status;
es.txerr = cmd->u.leaf.chip_state_event.tx_errors_count;
es.rxerr = cmd->u.leaf.chip_state_event.rx_errors_count;
es.leaf.error_factor = 0;
break;
default:
dev_err(dev->udev->dev.parent, "Invalid msg id (%d)\n",
msg->id);
dev_err(&dev->intf->dev, "Invalid cmd id (%d)\n", cmd->id);
return;
}
kvaser_usb_rx_error(dev, &es);
kvaser_usb_leaf_rx_error(dev, &es);
}
static void kvaser_usb_rx_can_err(const struct kvaser_usb_net_priv *priv,
const struct kvaser_msg *msg)
static void kvaser_usb_leaf_rx_can_err(const struct kvaser_usb_net_priv *priv,
const struct kvaser_cmd *cmd)
{
struct can_frame *cf;
struct sk_buff *skb;
struct net_device_stats *stats = &priv->netdev->stats;
if (msg->u.rx_can_header.flag & (MSG_FLAG_ERROR_FRAME |
if (cmd->u.rx_can_header.flag & (MSG_FLAG_ERROR_FRAME |
MSG_FLAG_NERR)) {
struct net_device_stats *stats = &priv->netdev->stats;
netdev_err(priv->netdev, "Unknown error (flags: 0x%02x)\n",
msg->u.rx_can_header.flag);
cmd->u.rx_can_header.flag);
stats->rx_errors++;
return;
}
if (msg->u.rx_can_header.flag & MSG_FLAG_OVERRUN) {
stats->rx_over_errors++;
stats->rx_errors++;
skb = alloc_can_err_skb(priv->netdev, &cf);
if (!skb) {
stats->rx_dropped++;
return;
}
cf->can_id |= CAN_ERR_CRTL;
cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
stats->rx_packets++;
stats->rx_bytes += cf->can_dlc;
netif_rx(skb);
}
if (cmd->u.rx_can_header.flag & MSG_FLAG_OVERRUN)
kvaser_usb_can_rx_over_error(priv->netdev);
}
static void kvaser_usb_rx_can_msg(const struct kvaser_usb *dev,
const struct kvaser_msg *msg)
static void kvaser_usb_leaf_rx_can_msg(const struct kvaser_usb *dev,
const struct kvaser_cmd *cmd)
{
struct kvaser_usb_net_priv *priv;
struct can_frame *cf;
struct sk_buff *skb;
struct net_device_stats *stats;
u8 channel = msg->u.rx_can_header.channel;
const u8 *rx_msg = NULL; /* GCC */
u8 channel = cmd->u.rx_can_header.channel;
const u8 *rx_data = NULL; /* GCC */
if (channel >= dev->nchannels) {
dev_err(dev->udev->dev.parent,
dev_err(&dev->intf->dev,
"Invalid channel number (%d)\n", channel);
return;
}
@@ -1152,28 +938,29 @@ static void kvaser_usb_rx_can_msg(const struct kvaser_usb *dev,
priv = dev->nets[channel];
stats = &priv->netdev->stats;
if ((msg->u.rx_can_header.flag & MSG_FLAG_ERROR_FRAME) &&
(dev->family == KVASER_LEAF && msg->id == CMD_LEAF_LOG_MESSAGE)) {
kvaser_leaf_rx_error(dev, msg);
if ((cmd->u.rx_can_header.flag & MSG_FLAG_ERROR_FRAME) &&
(dev->card_data.leaf.family == KVASER_LEAF &&
cmd->id == CMD_LEAF_LOG_MESSAGE)) {
kvaser_usb_leaf_leaf_rx_error(dev, cmd);
return;
} else if (msg->u.rx_can_header.flag & (MSG_FLAG_ERROR_FRAME |
} else if (cmd->u.rx_can_header.flag & (MSG_FLAG_ERROR_FRAME |
MSG_FLAG_NERR |
MSG_FLAG_OVERRUN)) {
kvaser_usb_rx_can_err(priv, msg);
kvaser_usb_leaf_rx_can_err(priv, cmd);
return;
} else if (msg->u.rx_can_header.flag & ~MSG_FLAG_REMOTE_FRAME) {
} else if (cmd->u.rx_can_header.flag & ~MSG_FLAG_REMOTE_FRAME) {
netdev_warn(priv->netdev,
"Unhandled frame (flags: 0x%02x)",
msg->u.rx_can_header.flag);
"Unhandled frame (flags: 0x%02x)\n",
cmd->u.rx_can_header.flag);
return;
}
switch (dev->family) {
switch (dev->card_data.leaf.family) {
case KVASER_LEAF:
rx_msg = msg->u.leaf.rx_can.msg;
rx_data = cmd->u.leaf.rx_can.data;
break;
case KVASER_USBCAN:
rx_msg = msg->u.usbcan.rx_can.msg;
rx_data = cmd->u.usbcan.rx_can.data;
break;
}
@@ -1183,38 +970,38 @@ static void kvaser_usb_rx_can_msg(const struct kvaser_usb *dev,
return;
}
if (dev->family == KVASER_LEAF && msg->id == CMD_LEAF_LOG_MESSAGE) {
cf->can_id = le32_to_cpu(msg->u.leaf.log_message.id);
if (dev->card_data.leaf.family == KVASER_LEAF && cmd->id ==
CMD_LEAF_LOG_MESSAGE) {
cf->can_id = le32_to_cpu(cmd->u.leaf.log_message.id);
if (cf->can_id & KVASER_EXTENDED_FRAME)
cf->can_id &= CAN_EFF_MASK | CAN_EFF_FLAG;
else
cf->can_id &= CAN_SFF_MASK;
cf->can_dlc = get_can_dlc(msg->u.leaf.log_message.dlc);
cf->can_dlc = get_can_dlc(cmd->u.leaf.log_message.dlc);
if (msg->u.leaf.log_message.flags & MSG_FLAG_REMOTE_FRAME)
if (cmd->u.leaf.log_message.flags & MSG_FLAG_REMOTE_FRAME)
cf->can_id |= CAN_RTR_FLAG;
else
memcpy(cf->data, &msg->u.leaf.log_message.data,
memcpy(cf->data, &cmd->u.leaf.log_message.data,
cf->can_dlc);
} else {
cf->can_id = ((rx_msg[0] & 0x1f) << 6) | (rx_msg[1] & 0x3f);
cf->can_id = ((rx_data[0] & 0x1f) << 6) | (rx_data[1] & 0x3f);
if (msg->id == CMD_RX_EXT_MESSAGE) {
if (cmd->id == CMD_RX_EXT_MESSAGE) {
cf->can_id <<= 18;
cf->can_id |= ((rx_msg[2] & 0x0f) << 14) |
((rx_msg[3] & 0xff) << 6) |
(rx_msg[4] & 0x3f);
cf->can_id |= ((rx_data[2] & 0x0f) << 14) |
((rx_data[3] & 0xff) << 6) |
(rx_data[4] & 0x3f);
cf->can_id |= CAN_EFF_FLAG;
}
cf->can_dlc = get_can_dlc(rx_msg[5]);
cf->can_dlc = get_can_dlc(rx_data[5]);
if (msg->u.rx_can_header.flag & MSG_FLAG_REMOTE_FRAME)
if (cmd->u.rx_can_header.flag & MSG_FLAG_REMOTE_FRAME)
cf->can_id |= CAN_RTR_FLAG;
else
memcpy(cf->data, &rx_msg[6],
cf->can_dlc);
memcpy(cf->data, &rx_data[6], cf->can_dlc);
}
stats->rx_packets++;
@@ -1222,14 +1009,14 @@ static void kvaser_usb_rx_can_msg(const struct kvaser_usb *dev,
netif_rx(skb);
}
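The receive path above and the transmit path earlier in this diff share one raw identifier layout: an extended ID is split into 5 + 6 + 4 + 8 + 6 bit groups across the first five payload bytes, a standard ID into 5 + 6 bits across the first two. A hedged sketch of the unpacking, mirroring the shifts used above (the helper name is illustrative and not part of the driver; kernel integer types are assumed):

/* Illustrative helper, not part of the driver: rebuild the CAN ID from the
 * byte groups used in the Kvaser Leaf/USBCAN command payload.
 */
static u32 example_kvaser_id_from_bytes(const u8 *d, bool ext)
{
	if (ext)				/* 5 + 6 + 4 + 8 + 6 = 29 bits */
		return ((u32)(d[0] & 0x1f) << 24) |
		       ((u32)(d[1] & 0x3f) << 18) |
		       ((u32)(d[2] & 0x0f) << 14) |
		       ((u32)(d[3] & 0xff) << 6) |
			(u32)(d[4] & 0x3f);

	return ((u32)(d[0] & 0x1f) << 6) |	/* 5 + 6 = 11 bits */
		(u32)(d[1] & 0x3f);
}

Packing (the tx_can path) is the exact inverse: the same masks and shifts write the ID into data[0]..data[4] or data[0]..data[1].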
static void kvaser_usb_start_chip_reply(const struct kvaser_usb *dev,
const struct kvaser_msg *msg)
static void kvaser_usb_leaf_start_chip_reply(const struct kvaser_usb *dev,
const struct kvaser_cmd *cmd)
{
struct kvaser_usb_net_priv *priv;
u8 channel = msg->u.simple.channel;
u8 channel = cmd->u.simple.channel;
if (channel >= dev->nchannels) {
dev_err(dev->udev->dev.parent,
dev_err(&dev->intf->dev,
"Invalid channel number (%d)\n", channel);
return;
}
@@ -1245,14 +1032,14 @@ static void kvaser_usb_start_chip_reply(const struct kvaser_usb *dev,
}
}
static void kvaser_usb_stop_chip_reply(const struct kvaser_usb *dev,
const struct kvaser_msg *msg)
static void kvaser_usb_leaf_stop_chip_reply(const struct kvaser_usb *dev,
const struct kvaser_cmd *cmd)
{
struct kvaser_usb_net_priv *priv;
u8 channel = msg->u.simple.channel;
u8 channel = cmd->u.simple.channel;
if (channel >= dev->nchannels) {
dev_err(dev->udev->dev.parent,
dev_err(&dev->intf->dev,
"Invalid channel number (%d)\n", channel);
return;
}
@@ -1262,84 +1049,68 @@ static void kvaser_usb_stop_chip_reply(const struct kvaser_usb *dev,
complete(&priv->stop_comp);
}
static void kvaser_usb_handle_message(const struct kvaser_usb *dev,
const struct kvaser_msg *msg)
static void kvaser_usb_leaf_handle_command(const struct kvaser_usb *dev,
const struct kvaser_cmd *cmd)
{
switch (msg->id) {
switch (cmd->id) {
case CMD_START_CHIP_REPLY:
kvaser_usb_start_chip_reply(dev, msg);
kvaser_usb_leaf_start_chip_reply(dev, cmd);
break;
case CMD_STOP_CHIP_REPLY:
kvaser_usb_stop_chip_reply(dev, msg);
kvaser_usb_leaf_stop_chip_reply(dev, cmd);
break;
case CMD_RX_STD_MESSAGE:
case CMD_RX_EXT_MESSAGE:
kvaser_usb_rx_can_msg(dev, msg);
kvaser_usb_leaf_rx_can_msg(dev, cmd);
break;
case CMD_LEAF_LOG_MESSAGE:
if (dev->family != KVASER_LEAF)
if (dev->card_data.leaf.family != KVASER_LEAF)
goto warn;
kvaser_usb_rx_can_msg(dev, msg);
kvaser_usb_leaf_rx_can_msg(dev, cmd);
break;
case CMD_CHIP_STATE_EVENT:
case CMD_CAN_ERROR_EVENT:
if (dev->family == KVASER_LEAF)
kvaser_leaf_rx_error(dev, msg);
if (dev->card_data.leaf.family == KVASER_LEAF)
kvaser_usb_leaf_leaf_rx_error(dev, cmd);
else
kvaser_usbcan_rx_error(dev, msg);
kvaser_usb_leaf_usbcan_rx_error(dev, cmd);
break;
case CMD_TX_ACKNOWLEDGE:
kvaser_usb_tx_acknowledge(dev, msg);
kvaser_usb_leaf_tx_acknowledge(dev, cmd);
break;
/* Ignored messages */
/* Ignored commands */
case CMD_USBCAN_CLOCK_OVERFLOW_EVENT:
if (dev->family != KVASER_USBCAN)
if (dev->card_data.leaf.family != KVASER_USBCAN)
goto warn;
break;
case CMD_FLUSH_QUEUE_REPLY:
if (dev->family != KVASER_LEAF)
if (dev->card_data.leaf.family != KVASER_LEAF)
goto warn;
break;
default:
warn: dev_warn(dev->udev->dev.parent,
"Unhandled message (%d)\n", msg->id);
warn: dev_warn(&dev->intf->dev, "Unhandled command (%d)\n", cmd->id);
break;
}
}
static void kvaser_usb_read_bulk_callback(struct urb *urb)
static void kvaser_usb_leaf_read_bulk_callback(struct kvaser_usb *dev,
void *buf, int len)
{
struct kvaser_usb *dev = urb->context;
struct kvaser_msg *msg;
struct kvaser_cmd *cmd;
int pos = 0;
int err, i;
switch (urb->status) {
case 0:
break;
case -ENOENT:
case -EPIPE:
case -EPROTO:
case -ESHUTDOWN:
return;
default:
dev_info(dev->udev->dev.parent, "Rx URB aborted (%d)\n",
urb->status);
goto resubmit_urb;
}
while (pos <= (int)(urb->actual_length - MSG_HEADER_LEN)) {
msg = urb->transfer_buffer + pos;
while (pos <= len - CMD_HEADER_LEN) {
cmd = buf + pos;
/* The Kvaser firmware can only read and write messages that
/* The Kvaser firmware can only read and write commands that
* do not cross the USB endpoint's wMaxPacketSize boundary.
* If a follow-up command crosses such boundary, firmware puts
* a placeholder zero-length command in its place then aligns
@@ -1348,457 +1119,119 @@ static void kvaser_usb_read_bulk_callback(struct urb *urb)
* Handle such cases or we're going to miss a significant
* number of events in case of a heavy rx load on the bus.
*/
if (msg->len == 0) {
pos = round_up(pos, le16_to_cpu(dev->bulk_in->
wMaxPacketSize));
if (cmd->len == 0) {
pos = round_up(pos, le16_to_cpu
(dev->bulk_in->wMaxPacketSize));
continue;
}
if (pos + msg->len > urb->actual_length) {
dev_err_ratelimited(dev->udev->dev.parent,
"Format error\n");
if (pos + cmd->len > len) {
dev_err_ratelimited(&dev->intf->dev, "Format error\n");
break;
}
kvaser_usb_handle_message(dev, msg);
pos += msg->len;
}
resubmit_urb:
usb_fill_bulk_urb(urb, dev->udev,
usb_rcvbulkpipe(dev->udev,
dev->bulk_in->bEndpointAddress),
urb->transfer_buffer, RX_BUFFER_SIZE,
kvaser_usb_read_bulk_callback, dev);
err = usb_submit_urb(urb, GFP_ATOMIC);
if (err == -ENODEV) {
for (i = 0; i < dev->nchannels; i++) {
if (!dev->nets[i])
continue;
netif_device_detach(dev->nets[i]->netdev);
}
} else if (err) {
dev_err(dev->udev->dev.parent,
"Failed resubmitting read bulk urb: %d\n", err);
kvaser_usb_leaf_handle_command(dev, cmd);
pos += cmd->len;
}
return;
}
static int kvaser_usb_setup_rx_urbs(struct kvaser_usb *dev)
static int kvaser_usb_leaf_set_opt_mode(const struct kvaser_usb_net_priv *priv)
{
int i, err = 0;
if (dev->rxinitdone)
return 0;
for (i = 0; i < MAX_RX_URBS; i++) {
struct urb *urb = NULL;
u8 *buf = NULL;
dma_addr_t buf_dma;
urb = usb_alloc_urb(0, GFP_KERNEL);
if (!urb) {
err = -ENOMEM;
break;
}
buf = usb_alloc_coherent(dev->udev, RX_BUFFER_SIZE,
GFP_KERNEL, &buf_dma);
if (!buf) {
dev_warn(dev->udev->dev.parent,
"No memory left for USB buffer\n");
usb_free_urb(urb);
err = -ENOMEM;
break;
}
usb_fill_bulk_urb(urb, dev->udev,
usb_rcvbulkpipe(dev->udev,
dev->bulk_in->bEndpointAddress),
buf, RX_BUFFER_SIZE,
kvaser_usb_read_bulk_callback,
dev);
urb->transfer_dma = buf_dma;
urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
usb_anchor_urb(urb, &dev->rx_submitted);
err = usb_submit_urb(urb, GFP_KERNEL);
if (err) {
usb_unanchor_urb(urb);
usb_free_coherent(dev->udev, RX_BUFFER_SIZE, buf,
buf_dma);
usb_free_urb(urb);
break;
}
dev->rxbuf[i] = buf;
dev->rxbuf_dma[i] = buf_dma;
usb_free_urb(urb);
}
if (i == 0) {
dev_warn(dev->udev->dev.parent,
"Cannot setup read URBs, error %d\n", err);
return err;
} else if (i < MAX_RX_URBS) {
dev_warn(dev->udev->dev.parent,
"RX performances may be slow\n");
}
dev->rxinitdone = true;
return 0;
}
static int kvaser_usb_set_opt_mode(const struct kvaser_usb_net_priv *priv)
{
struct kvaser_msg *msg;
struct kvaser_cmd *cmd;
int rc;
msg = kmalloc(sizeof(*msg), GFP_KERNEL);
if (!msg)
cmd = kmalloc(sizeof(*cmd), GFP_KERNEL);
if (!cmd)
return -ENOMEM;
msg->id = CMD_SET_CTRL_MODE;
msg->len = MSG_HEADER_LEN + sizeof(struct kvaser_msg_ctrl_mode);
msg->u.ctrl_mode.tid = 0xff;
msg->u.ctrl_mode.channel = priv->channel;
cmd->id = CMD_SET_CTRL_MODE;
cmd->len = CMD_HEADER_LEN + sizeof(struct kvaser_cmd_ctrl_mode);
cmd->u.ctrl_mode.tid = 0xff;
cmd->u.ctrl_mode.channel = priv->channel;
if (priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY)
msg->u.ctrl_mode.ctrl_mode = KVASER_CTRL_MODE_SILENT;
cmd->u.ctrl_mode.ctrl_mode = KVASER_CTRL_MODE_SILENT;
else
msg->u.ctrl_mode.ctrl_mode = KVASER_CTRL_MODE_NORMAL;
cmd->u.ctrl_mode.ctrl_mode = KVASER_CTRL_MODE_NORMAL;
rc = kvaser_usb_send_msg(priv->dev, msg);
rc = kvaser_usb_send_cmd(priv->dev, cmd, cmd->len);
kfree(msg);
kfree(cmd);
return rc;
}
static int kvaser_usb_start_chip(struct kvaser_usb_net_priv *priv)
static int kvaser_usb_leaf_start_chip(struct kvaser_usb_net_priv *priv)
{
int err;
init_completion(&priv->start_comp);
err = kvaser_usb_send_simple_msg(priv->dev, CMD_START_CHIP,
priv->channel);
err = kvaser_usb_leaf_send_simple_cmd(priv->dev, CMD_START_CHIP,
priv->channel);
if (err)
return err;
if (!wait_for_completion_timeout(&priv->start_comp,
msecs_to_jiffies(START_TIMEOUT)))
msecs_to_jiffies(KVASER_USB_TIMEOUT)))
return -ETIMEDOUT;
return 0;
}
static int kvaser_usb_open(struct net_device *netdev)
{
struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
struct kvaser_usb *dev = priv->dev;
int err;
err = open_candev(netdev);
if (err)
return err;
err = kvaser_usb_setup_rx_urbs(dev);
if (err)
goto error;
err = kvaser_usb_set_opt_mode(priv);
if (err)
goto error;
err = kvaser_usb_start_chip(priv);
if (err) {
netdev_warn(netdev, "Cannot start device, error %d\n", err);
goto error;
}
priv->can.state = CAN_STATE_ERROR_ACTIVE;
return 0;
error:
close_candev(netdev);
return err;
}
static void kvaser_usb_reset_tx_urb_contexts(struct kvaser_usb_net_priv *priv)
{
int i, max_tx_urbs;
max_tx_urbs = priv->dev->max_tx_urbs;
priv->active_tx_contexts = 0;
for (i = 0; i < max_tx_urbs; i++)
priv->tx_contexts[i].echo_index = max_tx_urbs;
}
/* This method might sleep. Do not call it in the atomic context
* of URB completions.
*/
static void kvaser_usb_unlink_tx_urbs(struct kvaser_usb_net_priv *priv)
{
usb_kill_anchored_urbs(&priv->tx_submitted);
kvaser_usb_reset_tx_urb_contexts(priv);
}
static void kvaser_usb_unlink_all_urbs(struct kvaser_usb *dev)
{
int i;
usb_kill_anchored_urbs(&dev->rx_submitted);
for (i = 0; i < MAX_RX_URBS; i++)
usb_free_coherent(dev->udev, RX_BUFFER_SIZE,
dev->rxbuf[i],
dev->rxbuf_dma[i]);
for (i = 0; i < dev->nchannels; i++) {
struct kvaser_usb_net_priv *priv = dev->nets[i];
if (priv)
kvaser_usb_unlink_tx_urbs(priv);
}
}
static int kvaser_usb_stop_chip(struct kvaser_usb_net_priv *priv)
static int kvaser_usb_leaf_stop_chip(struct kvaser_usb_net_priv *priv)
{
int err;
init_completion(&priv->stop_comp);
err = kvaser_usb_send_simple_msg(priv->dev, CMD_STOP_CHIP,
priv->channel);
err = kvaser_usb_leaf_send_simple_cmd(priv->dev, CMD_STOP_CHIP,
priv->channel);
if (err)
return err;
if (!wait_for_completion_timeout(&priv->stop_comp,
msecs_to_jiffies(STOP_TIMEOUT)))
msecs_to_jiffies(KVASER_USB_TIMEOUT)))
return -ETIMEDOUT;
return 0;
}
static int kvaser_usb_flush_queue(struct kvaser_usb_net_priv *priv)
static int kvaser_usb_leaf_reset_chip(struct kvaser_usb *dev, int channel)
{
return kvaser_usb_leaf_send_simple_cmd(dev, CMD_RESET_CHIP, channel);
}
static int kvaser_usb_leaf_flush_queue(struct kvaser_usb_net_priv *priv)
{
struct kvaser_msg *msg;
struct kvaser_cmd *cmd;
int rc;
msg = kmalloc(sizeof(*msg), GFP_KERNEL);
if (!msg)
cmd = kmalloc(sizeof(*cmd), GFP_KERNEL);
if (!cmd)
return -ENOMEM;
msg->id = CMD_FLUSH_QUEUE;
msg->len = MSG_HEADER_LEN + sizeof(struct kvaser_msg_flush_queue);
msg->u.flush_queue.channel = priv->channel;
msg->u.flush_queue.flags = 0x00;
cmd->id = CMD_FLUSH_QUEUE;
cmd->len = CMD_HEADER_LEN + sizeof(struct kvaser_cmd_flush_queue);
cmd->u.flush_queue.channel = priv->channel;
cmd->u.flush_queue.flags = 0x00;
rc = kvaser_usb_send_msg(priv->dev, msg);
rc = kvaser_usb_send_cmd(priv->dev, cmd, cmd->len);
kfree(msg);
kfree(cmd);
return rc;
}
static int kvaser_usb_close(struct net_device *netdev)
static int kvaser_usb_leaf_init_card(struct kvaser_usb *dev)
{
struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
struct kvaser_usb *dev = priv->dev;
int err;
netif_stop_queue(netdev);
err = kvaser_usb_flush_queue(priv);
if (err)
netdev_warn(netdev, "Cannot flush queue, error %d\n", err);
err = kvaser_usb_send_simple_msg(dev, CMD_RESET_CHIP, priv->channel);
if (err)
netdev_warn(netdev, "Cannot reset card, error %d\n", err);
err = kvaser_usb_stop_chip(priv);
if (err)
netdev_warn(netdev, "Cannot stop device, error %d\n", err);
/* reset tx contexts */
kvaser_usb_unlink_tx_urbs(priv);
struct kvaser_usb_dev_card_data *card_data = &dev->card_data;
priv->can.state = CAN_STATE_STOPPED;
close_candev(priv->netdev);
dev->cfg = &kvaser_usb_leaf_dev_cfg;
card_data->ctrlmode_supported |= CAN_CTRLMODE_3_SAMPLES;
return 0;
}
static void kvaser_usb_write_bulk_callback(struct urb *urb)
{
struct kvaser_usb_tx_urb_context *context = urb->context;
struct kvaser_usb_net_priv *priv;
struct net_device *netdev;
if (WARN_ON(!context))
return;
priv = context->priv;
netdev = priv->netdev;
kfree(urb->transfer_buffer);
if (!netif_device_present(netdev))
return;
if (urb->status)
netdev_info(netdev, "Tx URB aborted (%d)\n", urb->status);
}
static netdev_tx_t kvaser_usb_start_xmit(struct sk_buff *skb,
struct net_device *netdev)
{
struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
struct kvaser_usb *dev = priv->dev;
struct net_device_stats *stats = &netdev->stats;
struct can_frame *cf = (struct can_frame *)skb->data;
struct kvaser_usb_tx_urb_context *context = NULL;
struct urb *urb;
void *buf;
struct kvaser_msg *msg;
int i, err, ret = NETDEV_TX_OK;
u8 *msg_tx_can_flags = NULL; /* GCC */
unsigned long flags;
if (can_dropped_invalid_skb(netdev, skb))
return NETDEV_TX_OK;
urb = usb_alloc_urb(0, GFP_ATOMIC);
if (!urb) {
stats->tx_dropped++;
dev_kfree_skb(skb);
return NETDEV_TX_OK;
}
buf = kmalloc(sizeof(struct kvaser_msg), GFP_ATOMIC);
if (!buf) {
stats->tx_dropped++;
dev_kfree_skb(skb);
goto freeurb;
}
msg = buf;
msg->len = MSG_HEADER_LEN + sizeof(struct kvaser_msg_tx_can);
msg->u.tx_can.channel = priv->channel;
switch (dev->family) {
case KVASER_LEAF:
msg_tx_can_flags = &msg->u.tx_can.leaf.flags;
break;
case KVASER_USBCAN:
msg_tx_can_flags = &msg->u.tx_can.usbcan.flags;
break;
}
*msg_tx_can_flags = 0;
if (cf->can_id & CAN_EFF_FLAG) {
msg->id = CMD_TX_EXT_MESSAGE;
msg->u.tx_can.msg[0] = (cf->can_id >> 24) & 0x1f;
msg->u.tx_can.msg[1] = (cf->can_id >> 18) & 0x3f;
msg->u.tx_can.msg[2] = (cf->can_id >> 14) & 0x0f;
msg->u.tx_can.msg[3] = (cf->can_id >> 6) & 0xff;
msg->u.tx_can.msg[4] = cf->can_id & 0x3f;
} else {
msg->id = CMD_TX_STD_MESSAGE;
msg->u.tx_can.msg[0] = (cf->can_id >> 6) & 0x1f;
msg->u.tx_can.msg[1] = cf->can_id & 0x3f;
}
msg->u.tx_can.msg[5] = cf->can_dlc;
memcpy(&msg->u.tx_can.msg[6], cf->data, cf->can_dlc);
if (cf->can_id & CAN_RTR_FLAG)
*msg_tx_can_flags |= MSG_FLAG_REMOTE_FRAME;
spin_lock_irqsave(&priv->tx_contexts_lock, flags);
for (i = 0; i < dev->max_tx_urbs; i++) {
if (priv->tx_contexts[i].echo_index == dev->max_tx_urbs) {
context = &priv->tx_contexts[i];
context->echo_index = i;
can_put_echo_skb(skb, netdev, context->echo_index);
++priv->active_tx_contexts;
if (priv->active_tx_contexts >= dev->max_tx_urbs)
netif_stop_queue(netdev);
break;
}
}
spin_unlock_irqrestore(&priv->tx_contexts_lock, flags);
/* This should never happen; it implies a flow control bug */
if (!context) {
netdev_warn(netdev, "cannot find free context\n");
kfree(buf);
ret = NETDEV_TX_BUSY;
goto freeurb;
}
context->priv = priv;
context->dlc = cf->can_dlc;
msg->u.tx_can.tid = context->echo_index;
usb_fill_bulk_urb(urb, dev->udev,
usb_sndbulkpipe(dev->udev,
dev->bulk_out->bEndpointAddress),
buf, msg->len,
kvaser_usb_write_bulk_callback, context);
usb_anchor_urb(urb, &priv->tx_submitted);
err = usb_submit_urb(urb, GFP_ATOMIC);
if (unlikely(err)) {
spin_lock_irqsave(&priv->tx_contexts_lock, flags);
can_free_echo_skb(netdev, context->echo_index);
context->echo_index = dev->max_tx_urbs;
--priv->active_tx_contexts;
netif_wake_queue(netdev);
spin_unlock_irqrestore(&priv->tx_contexts_lock, flags);
usb_unanchor_urb(urb);
kfree(buf);
stats->tx_dropped++;
if (err == -ENODEV)
netif_device_detach(netdev);
else
netdev_warn(netdev, "Failed tx_urb %d\n", err);
goto freeurb;
}
ret = NETDEV_TX_OK;
freeurb:
usb_free_urb(urb);
return ret;
}
static const struct net_device_ops kvaser_usb_netdev_ops = {
.ndo_open = kvaser_usb_open,
.ndo_stop = kvaser_usb_close,
.ndo_start_xmit = kvaser_usb_start_xmit,
.ndo_change_mtu = can_change_mtu,
};
static const struct can_bittiming_const kvaser_usb_bittiming_const = {
static const struct can_bittiming_const kvaser_usb_leaf_bittiming_const = {
.name = "kvaser_usb",
.tseg1_min = KVASER_USB_TSEG1_MIN,
.tseg1_max = KVASER_USB_TSEG1_MAX,
@@ -1810,47 +1243,47 @@ static const struct can_bittiming_const kvaser_usb_bittiming_const = {
.brp_inc = KVASER_USB_BRP_INC,
};
static int kvaser_usb_set_bittiming(struct net_device *netdev)
static int kvaser_usb_leaf_set_bittiming(struct net_device *netdev)
{
struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
struct can_bittiming *bt = &priv->can.bittiming;
struct kvaser_usb *dev = priv->dev;
struct kvaser_msg *msg;
struct kvaser_cmd *cmd;
int rc;
msg = kmalloc(sizeof(*msg), GFP_KERNEL);
if (!msg)
cmd = kmalloc(sizeof(*cmd), GFP_KERNEL);
if (!cmd)
return -ENOMEM;
msg->id = CMD_SET_BUS_PARAMS;
msg->len = MSG_HEADER_LEN + sizeof(struct kvaser_msg_busparams);
msg->u.busparams.channel = priv->channel;
msg->u.busparams.tid = 0xff;
msg->u.busparams.bitrate = cpu_to_le32(bt->bitrate);
msg->u.busparams.sjw = bt->sjw;
msg->u.busparams.tseg1 = bt->prop_seg + bt->phase_seg1;
msg->u.busparams.tseg2 = bt->phase_seg2;
cmd->id = CMD_SET_BUS_PARAMS;
cmd->len = CMD_HEADER_LEN + sizeof(struct kvaser_cmd_busparams);
cmd->u.busparams.channel = priv->channel;
cmd->u.busparams.tid = 0xff;
cmd->u.busparams.bitrate = cpu_to_le32(bt->bitrate);
cmd->u.busparams.sjw = bt->sjw;
cmd->u.busparams.tseg1 = bt->prop_seg + bt->phase_seg1;
cmd->u.busparams.tseg2 = bt->phase_seg2;
if (priv->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES)
msg->u.busparams.no_samp = 3;
cmd->u.busparams.no_samp = 3;
else
msg->u.busparams.no_samp = 1;
cmd->u.busparams.no_samp = 1;
rc = kvaser_usb_send_msg(dev, msg);
rc = kvaser_usb_send_cmd(dev, cmd, cmd->len);
kfree(msg);
kfree(cmd);
return rc;
}
static int kvaser_usb_set_mode(struct net_device *netdev,
enum can_mode mode)
static int kvaser_usb_leaf_set_mode(struct net_device *netdev,
enum can_mode mode)
{
struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
int err;
switch (mode) {
case CAN_MODE_START:
err = kvaser_usb_simple_msg_async(priv, CMD_START_CHIP);
err = kvaser_usb_leaf_simple_cmd_async(priv, CMD_START_CHIP);
if (err)
return err;
break;
@@ -1861,8 +1294,8 @@ static int kvaser_usb_set_mode(struct net_device *netdev,
return 0;
}
static int kvaser_usb_get_berr_counter(const struct net_device *netdev,
struct can_berr_counter *bec)
static int kvaser_usb_leaf_get_berr_counter(const struct net_device *netdev,
struct can_berr_counter *bec)
{
struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
@@ -1871,215 +1304,55 @@ static int kvaser_usb_get_berr_counter(const struct net_device *netdev,
return 0;
}
static void kvaser_usb_remove_interfaces(struct kvaser_usb *dev)
{
int i;
for (i = 0; i < dev->nchannels; i++) {
if (!dev->nets[i])
continue;
unregister_candev(dev->nets[i]->netdev);
}
kvaser_usb_unlink_all_urbs(dev);
for (i = 0; i < dev->nchannels; i++) {
if (!dev->nets[i])
continue;
free_candev(dev->nets[i]->netdev);
}
}
static int kvaser_usb_init_one(struct usb_interface *intf,
const struct usb_device_id *id, int channel)
{
struct kvaser_usb *dev = usb_get_intfdata(intf);
struct net_device *netdev;
struct kvaser_usb_net_priv *priv;
int err;
err = kvaser_usb_send_simple_msg(dev, CMD_RESET_CHIP, channel);
if (err)
return err;
netdev = alloc_candev(sizeof(*priv) +
dev->max_tx_urbs * sizeof(*priv->tx_contexts),
dev->max_tx_urbs);
if (!netdev) {
dev_err(&intf->dev, "Cannot alloc candev\n");
return -ENOMEM;
}
priv = netdev_priv(netdev);
init_usb_anchor(&priv->tx_submitted);
init_completion(&priv->start_comp);
init_completion(&priv->stop_comp);
priv->dev = dev;
priv->netdev = netdev;
priv->channel = channel;
spin_lock_init(&priv->tx_contexts_lock);
kvaser_usb_reset_tx_urb_contexts(priv);
priv->can.state = CAN_STATE_STOPPED;
priv->can.clock.freq = CAN_USB_CLOCK;
priv->can.bittiming_const = &kvaser_usb_bittiming_const;
priv->can.do_set_bittiming = kvaser_usb_set_bittiming;
priv->can.do_set_mode = kvaser_usb_set_mode;
if (id->driver_info & KVASER_HAS_TXRX_ERRORS)
priv->can.do_get_berr_counter = kvaser_usb_get_berr_counter;
priv->can.ctrlmode_supported = CAN_CTRLMODE_3_SAMPLES;
if (id->driver_info & KVASER_HAS_SILENT_MODE)
priv->can.ctrlmode_supported |= CAN_CTRLMODE_LISTENONLY;
netdev->flags |= IFF_ECHO;
netdev->netdev_ops = &kvaser_usb_netdev_ops;
SET_NETDEV_DEV(netdev, &intf->dev);
netdev->dev_id = channel;
dev->nets[channel] = priv;
err = register_candev(netdev);
if (err) {
dev_err(&intf->dev, "Failed to register can device\n");
free_candev(netdev);
dev->nets[channel] = NULL;
return err;
}
netdev_dbg(netdev, "device registered\n");
return 0;
}
static int kvaser_usb_get_endpoints(const struct usb_interface *intf,
struct usb_endpoint_descriptor **in,
struct usb_endpoint_descriptor **out)
static int kvaser_usb_leaf_setup_endpoints(struct kvaser_usb *dev)
{
const struct usb_host_interface *iface_desc;
struct usb_endpoint_descriptor *endpoint;
int i;
iface_desc = &intf->altsetting[0];
iface_desc = &dev->intf->altsetting[0];
for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) {
endpoint = &iface_desc->endpoint[i].desc;
if (!*in && usb_endpoint_is_bulk_in(endpoint))
*in = endpoint;
if (!dev->bulk_in && usb_endpoint_is_bulk_in(endpoint))
dev->bulk_in = endpoint;
if (!*out && usb_endpoint_is_bulk_out(endpoint))
*out = endpoint;
if (!dev->bulk_out && usb_endpoint_is_bulk_out(endpoint))
dev->bulk_out = endpoint;
/* use first bulk endpoint for in and out */
if (*in && *out)
if (dev->bulk_in && dev->bulk_out)
return 0;
}
return -ENODEV;
}
static int kvaser_usb_probe(struct usb_interface *intf,
const struct usb_device_id *id)
{
struct kvaser_usb *dev;
int err = -ENOMEM;
int i, retry = 3;
dev = devm_kzalloc(&intf->dev, sizeof(*dev), GFP_KERNEL);
if (!dev)
return -ENOMEM;
if (kvaser_is_leaf(id)) {
dev->family = KVASER_LEAF;
} else if (kvaser_is_usbcan(id)) {
dev->family = KVASER_USBCAN;
} else {
dev_err(&intf->dev,
"Product ID (%d) does not belong to any known Kvaser USB family",
id->idProduct);
return -ENODEV;
}
err = kvaser_usb_get_endpoints(intf, &dev->bulk_in, &dev->bulk_out);
if (err) {
dev_err(&intf->dev, "Cannot get usb endpoint(s)");
return err;
}
dev->udev = interface_to_usbdev(intf);
init_usb_anchor(&dev->rx_submitted);
usb_set_intfdata(intf, dev);
/* On some x86 laptops, plugging a Kvaser device again after
* an unplug makes the firmware always ignore the very first
* command. For such a case, provide some room for retries
* instead of completely exiting the driver.
*/
do {
err = kvaser_usb_get_software_info(dev);
} while (--retry && err == -ETIMEDOUT);
if (err) {
dev_err(&intf->dev,
"Cannot get software infos, error %d\n", err);
return err;
}
dev_dbg(&intf->dev, "Firmware version: %d.%d.%d\n",
((dev->fw_version >> 24) & 0xff),
((dev->fw_version >> 16) & 0xff),
(dev->fw_version & 0xffff));
dev_dbg(&intf->dev, "Max outstanding tx = %d URBs\n", dev->max_tx_urbs);
err = kvaser_usb_get_card_info(dev);
if (err) {
dev_err(&intf->dev,
"Cannot get card infos, error %d\n", err);
return err;
}
for (i = 0; i < dev->nchannels; i++) {
err = kvaser_usb_init_one(intf, id, i);
if (err) {
kvaser_usb_remove_interfaces(dev);
return err;
}
}
return 0;
}
static void kvaser_usb_disconnect(struct usb_interface *intf)
{
struct kvaser_usb *dev = usb_get_intfdata(intf);
usb_set_intfdata(intf, NULL);
if (!dev)
return;
kvaser_usb_remove_interfaces(dev);
}
static struct usb_driver kvaser_usb_driver = {
.name = "kvaser_usb",
.probe = kvaser_usb_probe,
.disconnect = kvaser_usb_disconnect,
.id_table = kvaser_usb_table,
const struct kvaser_usb_dev_ops kvaser_usb_leaf_dev_ops = {
.dev_set_mode = kvaser_usb_leaf_set_mode,
.dev_set_bittiming = kvaser_usb_leaf_set_bittiming,
.dev_set_data_bittiming = NULL,
.dev_get_berr_counter = kvaser_usb_leaf_get_berr_counter,
.dev_setup_endpoints = kvaser_usb_leaf_setup_endpoints,
.dev_init_card = kvaser_usb_leaf_init_card,
.dev_get_software_info = kvaser_usb_leaf_get_software_info,
.dev_get_software_details = NULL,
.dev_get_card_info = kvaser_usb_leaf_get_card_info,
.dev_get_capabilities = NULL,
.dev_set_opt_mode = kvaser_usb_leaf_set_opt_mode,
.dev_start_chip = kvaser_usb_leaf_start_chip,
.dev_stop_chip = kvaser_usb_leaf_stop_chip,
.dev_reset_chip = kvaser_usb_leaf_reset_chip,
.dev_flush_queue = kvaser_usb_leaf_flush_queue,
.dev_read_bulk_callback = kvaser_usb_leaf_read_bulk_callback,
.dev_frame_to_cmd = kvaser_usb_leaf_frame_to_cmd,
};
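This ops table is the seam introduced by the refactoring: the Leaf/USBCAN-specific command handling above is reached from the device-independent core only through these pointers, and the optional hooks (dev_get_capabilities, dev_set_data_bittiming, ...) stay NULL for this family. A minimal sketch of such a dispatch, with the ops table passed in explicitly because the core's own bookkeeping is not part of this excerpt (the helper name is illustrative):

/* Illustrative only: start one channel through the ops table above,
 * checking an optional hook before calling it.
 */
static int example_start_channel(struct kvaser_usb_net_priv *priv,
				 const struct kvaser_usb_dev_ops *ops)
{
	int err;

	if (ops->dev_set_opt_mode) {
		err = ops->dev_set_opt_mode(priv);
		if (err)
			return err;
	}

	return ops->dev_start_chip(priv);
}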
module_usb_driver(kvaser_usb_driver);
MODULE_AUTHOR("Olivier Sobrie <olivier@sobrie.be>");
MODULE_DESCRIPTION("CAN driver for Kvaser CAN/USB devices");
MODULE_LICENSE("GPL v2");
static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg = {
.clock = {
.freq = CAN_USB_CLOCK,
},
.timestamp_freq = 1,
.bittiming_const = &kvaser_usb_leaf_bittiming_const,
};
@@ -423,6 +423,7 @@ static int pcan_usb_decode_error(struct pcan_usb_msg_context *mc, u8 n,
new_state = CAN_STATE_ERROR_WARNING;
break;
}
/* else: fall through */
case CAN_STATE_ERROR_WARNING:
if (n & PCAN_USB_ERROR_BUS_HEAVY) {
@@ -353,6 +353,7 @@ static netdev_tx_t peak_usb_ndo_start_xmit(struct sk_buff *skb,
default:
netdev_warn(netdev, "tx urb submitting failed err=%d\n",
err);
/* fall through */
case -ENOENT:
/* cable unplugged */
stats->tx_dropped++;
@@ -141,8 +141,10 @@ static int pcan_msg_add_rec(struct pcan_usb_pro_msg *pm, u8 id, ...)
switch (id) {
case PCAN_USBPRO_TXMSG8:
i += 4;
/* fall through */
case PCAN_USBPRO_TXMSG4:
i += 4;
/* fall through */
case PCAN_USBPRO_TXMSG0:
*pc++ = va_arg(ap, int);
*pc++ = va_arg(ap, int);
// SPDX-License-Identifier: GPL-2.0
/* Driver for Theobroma Systems UCAN devices, Protocol Version 3
*
* Copyright (C) 2018 Theobroma Systems Design und Consulting GmbH
*
*
* General Description:
*
* The USB Device uses three Endpoints:
*
 * CONTROL Endpoint: Is used to set up the device (start, stop,
* info, configure).
*
* IN Endpoint: The device sends CAN Frame Messages and Device
* Information using the IN endpoint.
*
* OUT Endpoint: The driver sends configuration requests, and CAN
* Frames on the out endpoint.
*
* Error Handling:
*
 * If error reporting is turned on, the device encodes errors into CAN
 * error frames (see uapi/linux/can/error.h) and sends them using the
 * IN Endpoint. The driver updates the statistics and forwards them.
*/
#include <linux/can.h>
#include <linux/can/dev.h>
#include <linux/can/error.h>
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/signal.h>
#include <linux/skbuff.h>
#include <linux/slab.h>
#include <linux/usb.h>
#include <linux/can.h>
#include <linux/can/dev.h>
#include <linux/can/error.h>
#define UCAN_DRIVER_NAME "ucan"
#define UCAN_MAX_RX_URBS 8
/* the CAN controller needs a while to enable/disable the bus */
#define UCAN_USB_CTL_PIPE_TIMEOUT 1000
/* this driver currently supports protocol version 3 only */
#define UCAN_PROTOCOL_VERSION_MIN 3
#define UCAN_PROTOCOL_VERSION_MAX 3
/* UCAN Message Definitions
* ------------------------
*
* ucan_message_out_t and ucan_message_in_t define the messages
* transmitted on the OUT and IN endpoint.
*
 * Multibyte fields are transmitted in little-endian byte order
*
* INTR Endpoint: a single uint32_t storing the current space in the fifo
*
* OUT Endpoint: single message of type ucan_message_out_t is
* transmitted on the out endpoint
*
 * IN Endpoint: multiple messages of type ucan_message_in_t concatenated in
* the following way:
*
 * m[n].len <=> the length of message n (including the header), in bytes
 * m[n] is aligned to a 4 byte boundary, hence
 *   offset(m[0])   := 0;
 *   offset(m[n+1]) := offset(m[n]) + ((m[n].len + 3) & ~3)
*
* this implies that
* offset(m[n]) % 4 <=> 0
*/
/* Device Global Commands */
enum {
UCAN_DEVICE_GET_FW_STRING = 0,
};
/* UCAN Commands */
enum {
/* start the can transceiver - val defines the operation mode */
UCAN_COMMAND_START = 0,
/* cancel pending transmissions and stop the can transceiver */
UCAN_COMMAND_STOP = 1,
/* send can transceiver into low-power sleep mode */
UCAN_COMMAND_SLEEP = 2,
/* wake up can transceiver from low-power sleep mode */
UCAN_COMMAND_WAKEUP = 3,
/* reset the can transceiver */
UCAN_COMMAND_RESET = 4,
/* get piece of info from the can transceiver - subcmd defines what
* piece
*/
UCAN_COMMAND_GET = 5,
/* clear or disable hardware filter - subcmd defines which of the two */
UCAN_COMMAND_FILTER = 6,
/* Setup bittiming */
UCAN_COMMAND_SET_BITTIMING = 7,
/* recover from bus-off state */
UCAN_COMMAND_RESTART = 8,
};
/* UCAN_COMMAND_START and UCAN_COMMAND_GET_INFO operation modes (bitmap).
* Undefined bits must be set to 0.
*/
enum {
UCAN_MODE_LOOPBACK = BIT(0),
UCAN_MODE_SILENT = BIT(1),
UCAN_MODE_3_SAMPLES = BIT(2),
UCAN_MODE_ONE_SHOT = BIT(3),
UCAN_MODE_BERR_REPORT = BIT(4),
};
/* UCAN_COMMAND_GET subcommands */
enum {
UCAN_COMMAND_GET_INFO = 0,
UCAN_COMMAND_GET_PROTOCOL_VERSION = 1,
};
/* UCAN_COMMAND_FILTER subcommands */
enum {
UCAN_FILTER_CLEAR = 0,
UCAN_FILTER_DISABLE = 1,
UCAN_FILTER_ENABLE = 2,
};
/* OUT endpoint message types */
enum {
UCAN_OUT_TX = 2, /* transmit a CAN frame */
};
/* IN endpoint message types */
enum {
UCAN_IN_TX_COMPLETE = 1, /* CAN frame transmission completed */
UCAN_IN_RX = 2, /* CAN frame received */
};
struct ucan_ctl_cmd_start {
__le16 mode; /* OR-ing any of UCAN_MODE_* */
} __packed;
struct ucan_ctl_cmd_set_bittiming {
__le32 tq; /* Time quanta (TQ) in nanoseconds */
__le16 brp; /* TQ Prescaler */
	__le16 sample_point; /* Sample point in tenths of a percent */
u8 prop_seg; /* Propagation segment in TQs */
u8 phase_seg1; /* Phase buffer segment 1 in TQs */
u8 phase_seg2; /* Phase buffer segment 2 in TQs */
u8 sjw; /* Synchronisation jump width in TQs */
} __packed;
struct ucan_ctl_cmd_device_info {
__le32 freq; /* Clock Frequency for tq generation */
u8 tx_fifo; /* Size of the transmission fifo */
u8 sjw_max; /* can_bittiming fields... */
u8 tseg1_min;
u8 tseg1_max;
u8 tseg2_min;
u8 tseg2_max;
__le16 brp_inc;
__le32 brp_min;
__le32 brp_max; /* ...can_bittiming fields */
__le16 ctrlmodes; /* supported control modes */
__le16 hwfilter; /* Number of HW filter banks */
__le16 rxmboxes; /* Number of receive Mailboxes */
} __packed;
struct ucan_ctl_cmd_get_protocol_version {
__le32 version;
} __packed;
union ucan_ctl_payload {
	/* Start the CAN controller
* bmRequest == UCAN_COMMAND_START
*/
struct ucan_ctl_cmd_start cmd_start;
/* Setup Bittiming
* bmRequest == UCAN_COMMAND_SET_BITTIMING
*/
struct ucan_ctl_cmd_set_bittiming cmd_set_bittiming;
/* Get Device Information
* bmRequest == UCAN_COMMAND_GET; wValue = UCAN_COMMAND_GET_INFO
*/
struct ucan_ctl_cmd_device_info cmd_get_device_info;
/* Get Protocol Version
* bmRequest == UCAN_COMMAND_GET;
* wValue = UCAN_COMMAND_GET_PROTOCOL_VERSION
*/
struct ucan_ctl_cmd_get_protocol_version cmd_get_protocol_version;
u8 raw[128];
} __packed;
enum {
UCAN_TX_COMPLETE_SUCCESS = BIT(0),
};
/* Transmission Complete within ucan_message_in */
struct ucan_tx_complete_entry_t {
u8 echo_index;
u8 flags;
} __packed __aligned(0x2);
/* CAN Data message format within ucan_message_in/out */
struct ucan_can_msg {
/* note DLC is computed by
* msg.len - sizeof (msg.len)
	 * - sizeof (msg.type)
	 * - sizeof (msg.subtype)
	 * - sizeof (msg.can_msg.id)
*/
__le32 id;
union {
u8 data[CAN_MAX_DLEN]; /* Data of CAN frames */
u8 dlc; /* RTR dlc */
};
} __packed;
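As a worked example of the DLC arithmetic described in the comment above: a classic data frame with 8 data bytes arrives with msg.len = 2 (len) + 1 (type) + 1 (subtype) + 4 (id) + 8 (data) = 16, so the DLC recovered by ucan_get_can_dlc() further down is 16 - (4 + 4) = 8. An RTR frame carries no data bytes, so its DLC is taken from msg.dlc instead. (The numbers assume the 4-byte ucan_message_in header defined below.)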
/* OUT Endpoint, outbound messages */
struct ucan_message_out {
	__le16 len; /* Length of the content including the header */
u8 type; /* UCAN_OUT_TX and friends */
u8 subtype; /* command sub type */
union {
/* Transmit CAN frame
* (type == UCAN_TX) && ((msg.can_msg.id & CAN_RTR_FLAG) == 0)
* subtype stores the echo id
*/
struct ucan_can_msg can_msg;
} msg;
} __packed __aligned(0x4);
/* IN Endpoint, inbound messages */
struct ucan_message_in {
	__le16 len; /* Length of the content including the header */
u8 type; /* UCAN_IN_RX and friends */
u8 subtype; /* command sub type */
union {
/* CAN Frame received
* (type == UCAN_IN_RX)
* && ((msg.can_msg.id & CAN_RTR_FLAG) == 0)
*/
struct ucan_can_msg can_msg;
/* CAN transmission complete
* (type == UCAN_IN_TX_COMPLETE)
*/
struct ucan_tx_complete_entry_t can_tx_complete_msg[0];
} __aligned(0x4) msg;
} __packed;
/* Macros to calculate message lengths */
#define UCAN_OUT_HDR_SIZE offsetof(struct ucan_message_out, msg)
#define UCAN_IN_HDR_SIZE offsetof(struct ucan_message_in, msg)
#define UCAN_IN_LEN(member) (UCAN_OUT_HDR_SIZE + sizeof(member))
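The IN-endpoint union above also carries TX acknowledgements as an array of two-byte entries, so their count follows directly from the message length. A hedged sketch of walking such a message (the helper name is illustrative; the driver's real completion handling is not part of this excerpt):

/* Illustrative only: iterate the TX-complete entries of one IN message
 * (type == UCAN_IN_TX_COMPLETE).  Each entry is two bytes wide.
 */
static void example_walk_tx_complete(struct ucan_message_in *m)
{
	int i;
	int count = (le16_to_cpu(m->len) - UCAN_IN_HDR_SIZE) /
		    sizeof(struct ucan_tx_complete_entry_t);

	for (i = 0; i < count; i++) {
		u8 echo_index = m->msg.can_tx_complete_msg[i].echo_index;
		bool success = m->msg.can_tx_complete_msg[i].flags &
			       UCAN_TX_COMPLETE_SUCCESS;

		/* here the driver would release the matching echo skb */
		(void)echo_index;
		(void)success;
	}
}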
struct ucan_priv;
/* Context Information for transmission URBs */
struct ucan_urb_context {
struct ucan_priv *up;
u8 dlc;
bool allocated;
};
/* Information reported by the USB device */
struct ucan_device_info {
struct can_bittiming_const bittiming_const;
u8 tx_fifo;
};
/* Driver private data */
struct ucan_priv {
/* must be the first member */
struct can_priv can;
/* linux USB device structures */
struct usb_device *udev;
struct usb_interface *intf;
struct net_device *netdev;
/* lock for can->echo_skb (used around
* can_put/get/free_echo_skb
*/
spinlock_t echo_skb_lock;
	/* usb device information */
u8 intf_index;
u8 in_ep_addr;
u8 out_ep_addr;
u16 in_ep_size;
/* transmission and reception buffers */
struct usb_anchor rx_urbs;
struct usb_anchor tx_urbs;
union ucan_ctl_payload *ctl_msg_buffer;
struct ucan_device_info device_info;
/* transmission control information and locks */
spinlock_t context_lock;
unsigned int available_tx_urbs;
struct ucan_urb_context *context_array;
};
static u8 ucan_get_can_dlc(struct ucan_can_msg *msg, u16 len)
{
if (le32_to_cpu(msg->id) & CAN_RTR_FLAG)
return get_can_dlc(msg->dlc);
else
return get_can_dlc(len - (UCAN_IN_HDR_SIZE + sizeof(msg->id)));
}
static void ucan_release_context_array(struct ucan_priv *up)
{
if (!up->context_array)
return;
	/* lock is not needed because the driver is currently opening or closing */
up->available_tx_urbs = 0;
kfree(up->context_array);
up->context_array = NULL;
}
static int ucan_alloc_context_array(struct ucan_priv *up)
{
int i;
/* release contexts if any */
ucan_release_context_array(up);
up->context_array = kcalloc(up->device_info.tx_fifo,
sizeof(*up->context_array),
GFP_KERNEL);
if (!up->context_array) {
netdev_err(up->netdev,
"Not enough memory to allocate tx contexts\n");
return -ENOMEM;
}
for (i = 0; i < up->device_info.tx_fifo; i++) {
up->context_array[i].allocated = false;
up->context_array[i].up = up;
}
/* lock is not needed because the driver is currently opening */
up->available_tx_urbs = up->device_info.tx_fifo;
return 0;
}
static struct ucan_urb_context *ucan_alloc_context(struct ucan_priv *up)
{
int i;
unsigned long flags;
struct ucan_urb_context *ret = NULL;
if (WARN_ON_ONCE(!up->context_array))
return NULL;
/* execute context operation atomically */
spin_lock_irqsave(&up->context_lock, flags);
for (i = 0; i < up->device_info.tx_fifo; i++) {
if (!up->context_array[i].allocated) {
/* update context */
ret = &up->context_array[i];
up->context_array[i].allocated = true;
/* stop queue if necessary */
up->available_tx_urbs--;
if (!up->available_tx_urbs)
netif_stop_queue(up->netdev);
break;
}
}
spin_unlock_irqrestore(&up->context_lock, flags);
return ret;
}
static bool ucan_release_context(struct ucan_priv *up,
struct ucan_urb_context *ctx)
{
unsigned long flags;
bool ret = false;
if (WARN_ON_ONCE(!up->context_array))
return false;
/* execute context operation atomically */
spin_lock_irqsave(&up->context_lock, flags);
/* if the context was not allocated, the device may have sent garbage */
if (ctx->allocated) {
ctx->allocated = false;
/* check if the queue needs to be woken */
if (!up->available_tx_urbs)
netif_wake_queue(up->netdev);
up->available_tx_urbs++;
ret = true;
}
spin_unlock_irqrestore(&up->context_lock, flags);
return ret;
}
static int ucan_ctrl_command_out(struct ucan_priv *up,
u8 cmd, u16 subcmd, u16 datalen)
{
return usb_control_msg(up->udev,
usb_sndctrlpipe(up->udev, 0),
cmd,
USB_DIR_OUT | USB_TYPE_VENDOR |
USB_RECIP_INTERFACE,
subcmd,
up->intf_index,
up->ctl_msg_buffer,
datalen,
UCAN_USB_CTL_PIPE_TIMEOUT);
}
static int ucan_device_request_in(struct ucan_priv *up,
u8 cmd, u16 subcmd, u16 datalen)
{
return usb_control_msg(up->udev,
usb_rcvctrlpipe(up->udev, 0),
cmd,
USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
subcmd,
0,
up->ctl_msg_buffer,
datalen,
UCAN_USB_CTL_PIPE_TIMEOUT);
}
/* Parse the device information structure reported by the device and
* setup private variables accordingly
*/
static void ucan_parse_device_info(struct ucan_priv *up,
struct ucan_ctl_cmd_device_info *device_info)
{
struct can_bittiming_const *bittiming =
&up->device_info.bittiming_const;
u16 ctrlmodes;
/* store the data */
up->can.clock.freq = le32_to_cpu(device_info->freq);
up->device_info.tx_fifo = device_info->tx_fifo;
strcpy(bittiming->name, "ucan");
bittiming->tseg1_min = device_info->tseg1_min;
bittiming->tseg1_max = device_info->tseg1_max;
bittiming->tseg2_min = device_info->tseg2_min;
bittiming->tseg2_max = device_info->tseg2_max;
bittiming->sjw_max = device_info->sjw_max;
bittiming->brp_min = le32_to_cpu(device_info->brp_min);
bittiming->brp_max = le32_to_cpu(device_info->brp_max);
bittiming->brp_inc = le16_to_cpu(device_info->brp_inc);
ctrlmodes = le16_to_cpu(device_info->ctrlmodes);
up->can.ctrlmode_supported = 0;
if (ctrlmodes & UCAN_MODE_LOOPBACK)
up->can.ctrlmode_supported |= CAN_CTRLMODE_LOOPBACK;
if (ctrlmodes & UCAN_MODE_SILENT)
up->can.ctrlmode_supported |= CAN_CTRLMODE_LISTENONLY;
if (ctrlmodes & UCAN_MODE_3_SAMPLES)
up->can.ctrlmode_supported |= CAN_CTRLMODE_3_SAMPLES;
if (ctrlmodes & UCAN_MODE_ONE_SHOT)
up->can.ctrlmode_supported |= CAN_CTRLMODE_ONE_SHOT;
if (ctrlmodes & UCAN_MODE_BERR_REPORT)
up->can.ctrlmode_supported |= CAN_CTRLMODE_BERR_REPORTING;
}
/* Handle a CAN error frame that we have received from the device.
* Returns true if the can state has changed.
*/
static bool ucan_handle_error_frame(struct ucan_priv *up,
struct ucan_message_in *m,
canid_t canid)
{
enum can_state new_state = up->can.state;
struct net_device_stats *net_stats = &up->netdev->stats;
struct can_device_stats *can_stats = &up->can.can_stats;
if (canid & CAN_ERR_LOSTARB)
can_stats->arbitration_lost++;
if (canid & CAN_ERR_BUSERROR)
can_stats->bus_error++;
if (canid & CAN_ERR_ACK)
net_stats->tx_errors++;
if (canid & CAN_ERR_BUSOFF)
new_state = CAN_STATE_BUS_OFF;
/* controller problems, details in data[1] */
if (canid & CAN_ERR_CRTL) {
u8 d1 = m->msg.can_msg.data[1];
if (d1 & CAN_ERR_CRTL_RX_OVERFLOW)
net_stats->rx_over_errors++;
/* controller state bits: if multiple are set the worst wins */
if (d1 & CAN_ERR_CRTL_ACTIVE)
new_state = CAN_STATE_ERROR_ACTIVE;
if (d1 & (CAN_ERR_CRTL_RX_WARNING | CAN_ERR_CRTL_TX_WARNING))
new_state = CAN_STATE_ERROR_WARNING;
if (d1 & (CAN_ERR_CRTL_RX_PASSIVE | CAN_ERR_CRTL_TX_PASSIVE))
new_state = CAN_STATE_ERROR_PASSIVE;
}
/* protocol error, details in data[2] */
if (canid & CAN_ERR_PROT) {
u8 d2 = m->msg.can_msg.data[2];
if (d2 & CAN_ERR_PROT_TX)
net_stats->tx_errors++;
else
net_stats->rx_errors++;
}
/* no state change - we are done */
if (up->can.state == new_state)
return false;
/* we switched into a better state */
if (up->can.state > new_state) {
up->can.state = new_state;
return true;
}
/* we switched into a worse state */
up->can.state = new_state;
switch (new_state) {
case CAN_STATE_BUS_OFF:
can_stats->bus_off++;
can_bus_off(up->netdev);
break;
case CAN_STATE_ERROR_PASSIVE:
can_stats->error_passive++;
break;
case CAN_STATE_ERROR_WARNING:
can_stats->error_warning++;
break;
default:
break;
}
return true;
}
/* Callback on reception of a can frame via the IN endpoint
*
* This function allocates an skb and transfers it to the Linux
* network stack
*/
static void ucan_rx_can_msg(struct ucan_priv *up, struct ucan_message_in *m)
{
int len;
canid_t canid;
struct can_frame *cf;
struct sk_buff *skb;
struct net_device_stats *stats = &up->netdev->stats;
/* get the contents of the length field */
len = le16_to_cpu(m->len);
/* check sanity */
if (len < UCAN_IN_HDR_SIZE + sizeof(m->msg.can_msg.id)) {
netdev_warn(up->netdev, "invalid input message len: %d\n", len);
return;
}
/* handle error frames */
canid = le32_to_cpu(m->msg.can_msg.id);
if (canid & CAN_ERR_FLAG) {
bool busstate_changed = ucan_handle_error_frame(up, m, canid);
/* if berr-reporting is off only state changes get through */
if (!(up->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING) &&
!busstate_changed)
return;
} else {
canid_t canid_mask;
/* compute the mask for canid */
canid_mask = CAN_RTR_FLAG;
if (canid & CAN_EFF_FLAG)
canid_mask |= CAN_EFF_MASK | CAN_EFF_FLAG;
else
canid_mask |= CAN_SFF_MASK;
if (canid & ~canid_mask)
netdev_warn(up->netdev,
"unexpected bits set (canid %x, mask %x)",
canid, canid_mask);
canid &= canid_mask;
}
/* allocate skb */
skb = alloc_can_skb(up->netdev, &cf);
if (!skb)
return;
/* fill the can frame */
cf->can_id = canid;
/* compute DLC taking RTR_FLAG into account */
cf->can_dlc = ucan_get_can_dlc(&m->msg.can_msg, len);
/* copy the payload of non RTR frames */
if (!(cf->can_id & CAN_RTR_FLAG) || (cf->can_id & CAN_ERR_FLAG))
memcpy(cf->data, m->msg.can_msg.data, cf->can_dlc);
/* don't count error frames as real packets */
stats->rx_packets++;
stats->rx_bytes += cf->can_dlc;
/* pass it to Linux */
netif_rx(skb);
}
/* callback indicating completed transmission */
static void ucan_tx_complete_msg(struct ucan_priv *up,
struct ucan_message_in *m)
{
unsigned long flags;
u16 count, i;
u8 echo_index, dlc;
u16 len = le16_to_cpu(m->len);
struct ucan_urb_context *context;
if (len < UCAN_IN_HDR_SIZE || (len % 2 != 0)) {
netdev_err(up->netdev, "invalid tx complete length\n");
return;
}
count = (len - UCAN_IN_HDR_SIZE) / 2;
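/* each struct ucan_tx_complete_entry_t is 2 bytes (echo_index + flags),
 * so e.g. len == UCAN_IN_HDR_SIZE + 6 announces 3 completed frames
 */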
for (i = 0; i < count; i++) {
/* we did not submit such echo ids */
echo_index = m->msg.can_tx_complete_msg[i].echo_index;
if (echo_index >= up->device_info.tx_fifo) {
up->netdev->stats.tx_errors++;
netdev_err(up->netdev,
"invalid echo_index %d received\n",
echo_index);
continue;
}
/* gather information from the context */
context = &up->context_array[echo_index];
dlc = READ_ONCE(context->dlc);
/* Release context and restart queue if necessary.
* Also check if the context was allocated
*/
if (!ucan_release_context(up, context))
continue;
spin_lock_irqsave(&up->echo_skb_lock, flags);
if (m->msg.can_tx_complete_msg[i].flags &
UCAN_TX_COMPLETE_SUCCESS) {
/* update statistics */
up->netdev->stats.tx_packets++;
up->netdev->stats.tx_bytes += dlc;
can_get_echo_skb(up->netdev, echo_index);
} else {
up->netdev->stats.tx_dropped++;
can_free_echo_skb(up->netdev, echo_index);
}
spin_unlock_irqrestore(&up->echo_skb_lock, flags);
}
}
/* callback on reception of a USB message */
static void ucan_read_bulk_callback(struct urb *urb)
{
int ret;
int pos;
struct ucan_priv *up = urb->context;
struct net_device *netdev = up->netdev;
struct ucan_message_in *m;
/* the device is not up and the driver should not receive any
* data on the bulk in pipe
*/
if (WARN_ON(!up->context_array)) {
usb_free_coherent(up->udev,
up->in_ep_size,
urb->transfer_buffer,
urb->transfer_dma);
return;
}
/* check URB status */
switch (urb->status) {
case 0:
break;
case -ENOENT:
case -EPIPE:
case -EPROTO:
case -ESHUTDOWN:
case -ETIME:
/* urb is not resubmitted -> free dma data */
usb_free_coherent(up->udev,
up->in_ep_size,
urb->transfer_buffer,
urb->transfer_dma);
netdev_dbg(up->netdev, "not resubmitting urb; status: %d\n",
urb->status);
return;
default:
goto resubmit;
}
/* sanity check */
if (!netif_device_present(netdev))
return;
/* iterate over input */
pos = 0;
while (pos < urb->actual_length) {
int len;
/* check sanity (length of header) */
if ((urb->actual_length - pos) < UCAN_IN_HDR_SIZE) {
netdev_warn(up->netdev,
"invalid message (short; no hdr; l:%d)\n",
urb->actual_length);
goto resubmit;
}
/* setup the message address */
m = (struct ucan_message_in *)
((u8 *)urb->transfer_buffer + pos);
len = le16_to_cpu(m->len);
/* check sanity (length of content) */
if (urb->actual_length - pos < len) {
netdev_warn(up->netdev,
"invalid message (short; no data; l:%d)\n",
urb->actual_length);
print_hex_dump(KERN_WARNING,
"raw data: ",
DUMP_PREFIX_ADDRESS,
16,
1,
urb->transfer_buffer,
urb->actual_length,
true);
goto resubmit;
}
switch (m->type) {
case UCAN_IN_RX:
ucan_rx_can_msg(up, m);
break;
case UCAN_IN_TX_COMPLETE:
ucan_tx_complete_msg(up, m);
break;
default:
netdev_warn(up->netdev,
"invalid message (type; t:%d)\n",
m->type);
break;
}
/* proceed to next message */
pos += len;
/* align to 4 byte boundary */
pos = round_up(pos, 4);
}
resubmit:
/* resubmit urb when done */
usb_fill_bulk_urb(urb, up->udev,
usb_rcvbulkpipe(up->udev,
up->in_ep_addr),
urb->transfer_buffer,
up->in_ep_size,
ucan_read_bulk_callback,
up);
usb_anchor_urb(urb, &up->rx_urbs);
ret = usb_submit_urb(urb, GFP_KERNEL);
if (ret < 0) {
netdev_err(up->netdev,
"failed resubmitting read bulk urb: %d\n",
ret);
usb_unanchor_urb(urb);
usb_free_coherent(up->udev,
up->in_ep_size,
urb->transfer_buffer,
urb->transfer_dma);
if (ret == -ENODEV)
netif_device_detach(netdev);
}
}
/* callback after transmission of a USB message */
static void ucan_write_bulk_callback(struct urb *urb)
{
unsigned long flags;
struct ucan_priv *up;
struct ucan_urb_context *context = urb->context;
/* get the urb context */
if (WARN_ON_ONCE(!context))
return;
/* free up our allocated buffer */
usb_free_coherent(urb->dev,
sizeof(struct ucan_message_out),
urb->transfer_buffer,
urb->transfer_dma);
up = context->up;
if (WARN_ON_ONCE(!up))
return;
/* sanity check */
if (!netif_device_present(up->netdev))
return;
/* transmission failed (USB - the device will not send a TX complete) */
if (urb->status) {
netdev_warn(up->netdev,
"failed to transmit USB message to device: %d\n",
urb->status);
/* update counters and clean up */
spin_lock_irqsave(&up->echo_skb_lock, flags);
can_free_echo_skb(up->netdev, context - up->context_array);
spin_unlock_irqrestore(&up->echo_skb_lock, flags);
up->netdev->stats.tx_dropped++;
/* release context and restart the queue if necessary */
if (!ucan_release_context(up, context))
netdev_err(up->netdev,
"urb failed, failed to release context\n");
}
}
static void ucan_cleanup_rx_urbs(struct ucan_priv *up, struct urb **urbs)
{
int i;
for (i = 0; i < UCAN_MAX_RX_URBS; i++) {
if (urbs[i]) {
usb_unanchor_urb(urbs[i]);
usb_free_coherent(up->udev,
up->in_ep_size,
urbs[i]->transfer_buffer,
urbs[i]->transfer_dma);
usb_free_urb(urbs[i]);
}
}
memset(urbs, 0, sizeof(*urbs) * UCAN_MAX_RX_URBS);
}
static int ucan_prepare_and_anchor_rx_urbs(struct ucan_priv *up,
struct urb **urbs)
{
int i;
memset(urbs, 0, sizeof(*urbs) * UCAN_MAX_RX_URBS);
for (i = 0; i < UCAN_MAX_RX_URBS; i++) {
void *buf;
urbs[i] = usb_alloc_urb(0, GFP_KERNEL);
if (!urbs[i])
goto err;
buf = usb_alloc_coherent(up->udev,
up->in_ep_size,
GFP_KERNEL, &urbs[i]->transfer_dma);
if (!buf) {
/* cleanup this urb */
usb_free_urb(urbs[i]);
urbs[i] = NULL;
goto err;
}
usb_fill_bulk_urb(urbs[i], up->udev,
usb_rcvbulkpipe(up->udev,
up->in_ep_addr),
buf,
up->in_ep_size,
ucan_read_bulk_callback,
up);
urbs[i]->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
usb_anchor_urb(urbs[i], &up->rx_urbs);
}
return 0;
err:
/* cleanup other unsubmitted urbs */
ucan_cleanup_rx_urbs(up, urbs);
return -ENOMEM;
}
/* Submits rx urbs with the semantic: Either submit all, or clean up
 * everything. In case of errors, submitted urbs are killed and all urbs in
 * the array are freed. In case of no errors, every entry in the urb
 * array is set to NULL.
*/
static int ucan_submit_rx_urbs(struct ucan_priv *up, struct urb **urbs)
{
int i, ret;
/* Iterate over all urbs to submit. On success remove the urb
* from the list.
*/
for (i = 0; i < UCAN_MAX_RX_URBS; i++) {
ret = usb_submit_urb(urbs[i], GFP_KERNEL);
if (ret) {
netdev_err(up->netdev,
"could not submit urb; code: %d\n",
ret);
goto err;
}
/* The URB is anchored; drop our reference, the USB core will
 * take care of freeing it
 */
usb_free_urb(urbs[i]);
urbs[i] = NULL;
}
return 0;
err:
/* Cleanup unsubmitted urbs */
ucan_cleanup_rx_urbs(up, urbs);
/* Kill urbs that are already submitted */
usb_kill_anchored_urbs(&up->rx_urbs);
return ret;
}
/* Open the network device */
static int ucan_open(struct net_device *netdev)
{
int ret, ret_cleanup;
u16 ctrlmode;
struct urb *urbs[UCAN_MAX_RX_URBS];
struct ucan_priv *up = netdev_priv(netdev);
ret = ucan_alloc_context_array(up);
if (ret)
return ret;
/* Allocate and prepare IN URBS - allocated and anchored
* urbs are stored in urbs[] for cleanup during error handling
*/
ret = ucan_prepare_and_anchor_rx_urbs(up, urbs);
if (ret)
goto err_contexts;
/* Check the control mode */
ctrlmode = 0;
if (up->can.ctrlmode & CAN_CTRLMODE_LOOPBACK)
ctrlmode |= UCAN_MODE_LOOPBACK;
if (up->can.ctrlmode & CAN_CTRLMODE_LISTENONLY)
ctrlmode |= UCAN_MODE_SILENT;
if (up->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES)
ctrlmode |= UCAN_MODE_3_SAMPLES;
if (up->can.ctrlmode & CAN_CTRLMODE_ONE_SHOT)
ctrlmode |= UCAN_MODE_ONE_SHOT;
/* Enable this in any case - filtering is done within the
* receive path
*/
ctrlmode |= UCAN_MODE_BERR_REPORT;
up->ctl_msg_buffer->cmd_start.mode = cpu_to_le16(ctrlmode);
/* Driver is ready to receive data - start the USB device */
ret = ucan_ctrl_command_out(up, UCAN_COMMAND_START, 0, 2);
if (ret < 0) {
netdev_err(up->netdev,
"could not start device, code: %d\n",
ret);
goto err_reset;
}
/* Call CAN layer open */
ret = open_candev(netdev);
if (ret)
goto err_stop;
/* Driver is ready to receive data. Submit RX URBS */
ret = ucan_submit_rx_urbs(up, urbs);
if (ret)
goto err_stop;
up->can.state = CAN_STATE_ERROR_ACTIVE;
/* Start the network queue */
netif_start_queue(netdev);
return 0;
err_stop:
/* The device has already been started, stop it */
ret_cleanup = ucan_ctrl_command_out(up, UCAN_COMMAND_STOP, 0, 0);
if (ret_cleanup < 0)
netdev_err(up->netdev,
"could not stop device, code: %d\n",
ret_cleanup);
err_reset:
/* The device might have received data, reset it for
* consistent state
*/
ret_cleanup = ucan_ctrl_command_out(up, UCAN_COMMAND_RESET, 0, 0);
if (ret_cleanup < 0)
netdev_err(up->netdev,
"could not reset device, code: %d\n",
ret_cleanup);
/* clean up unsubmitted urbs */
ucan_cleanup_rx_urbs(up, urbs);
err_contexts:
ucan_release_context_array(up);
return ret;
}
static struct urb *ucan_prepare_tx_urb(struct ucan_priv *up,
struct ucan_urb_context *context,
struct can_frame *cf,
u8 echo_index)
{
int mlen;
struct urb *urb;
struct ucan_message_out *m;
/* create a URB, and a buffer for it, and copy the data to the URB */
urb = usb_alloc_urb(0, GFP_ATOMIC);
if (!urb) {
netdev_err(up->netdev, "no memory left for URBs\n");
return NULL;
}
m = usb_alloc_coherent(up->udev,
sizeof(struct ucan_message_out),
GFP_ATOMIC,
&urb->transfer_dma);
if (!m) {
netdev_err(up->netdev, "no memory left for USB buffer\n");
usb_free_urb(urb);
return NULL;
}
/* build the USB message */
m->type = UCAN_OUT_TX;
m->msg.can_msg.id = cpu_to_le32(cf->can_id);
if (cf->can_id & CAN_RTR_FLAG) {
mlen = UCAN_OUT_HDR_SIZE +
offsetof(struct ucan_can_msg, dlc) +
sizeof(m->msg.can_msg.dlc);
m->msg.can_msg.dlc = cf->can_dlc;
} else {
mlen = UCAN_OUT_HDR_SIZE +
sizeof(m->msg.can_msg.id) + cf->can_dlc;
memcpy(m->msg.can_msg.data, cf->data, cf->can_dlc);
}
m->len = cpu_to_le16(mlen);
context->dlc = cf->can_dlc;
m->subtype = echo_index;
/* build the urb */
usb_fill_bulk_urb(urb, up->udev,
usb_sndbulkpipe(up->udev,
up->out_ep_addr),
m, mlen, ucan_write_bulk_callback, context);
urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
return urb;
}
static void ucan_clean_up_tx_urb(struct ucan_priv *up, struct urb *urb)
{
usb_free_coherent(up->udev, sizeof(struct ucan_message_out),
urb->transfer_buffer, urb->transfer_dma);
usb_free_urb(urb);
}
/* callback when Linux needs to send a can frame */
static netdev_tx_t ucan_start_xmit(struct sk_buff *skb,
struct net_device *netdev)
{
unsigned long flags;
int ret;
u8 echo_index;
struct urb *urb;
struct ucan_urb_context *context;
struct ucan_priv *up = netdev_priv(netdev);
struct can_frame *cf = (struct can_frame *)skb->data;
/* check skb */
if (can_dropped_invalid_skb(netdev, skb))
return NETDEV_TX_OK;
/* allocate a context and slow down tx path, if fifo state is low */
context = ucan_alloc_context(up);
if (WARN_ON_ONCE(!context))
	return NETDEV_TX_BUSY;
echo_index = context - up->context_array;
/* prepare urb for transmission */
urb = ucan_prepare_tx_urb(up, context, cf, echo_index);
if (!urb)
goto drop;
/* put the skb on can loopback stack */
spin_lock_irqsave(&up->echo_skb_lock, flags);
can_put_echo_skb(skb, up->netdev, echo_index);
spin_unlock_irqrestore(&up->echo_skb_lock, flags);
/* transmit it */
usb_anchor_urb(urb, &up->tx_urbs);
ret = usb_submit_urb(urb, GFP_ATOMIC);
/* cleanup urb */
if (ret) {
/* on error, clean up */
usb_unanchor_urb(urb);
ucan_clean_up_tx_urb(up, urb);
if (!ucan_release_context(up, context))
netdev_err(up->netdev,
"xmit err: failed to release context\n");
/* remove the skb from the echo stack - this also
* frees the skb
*/
spin_lock_irqsave(&up->echo_skb_lock, flags);
can_free_echo_skb(up->netdev, echo_index);
spin_unlock_irqrestore(&up->echo_skb_lock, flags);
if (ret == -ENODEV) {
netif_device_detach(up->netdev);
} else {
netdev_warn(up->netdev,
"xmit err: failed to submit urb %d\n",
ret);
up->netdev->stats.tx_dropped++;
}
return NETDEV_TX_OK;
}
netif_trans_update(netdev);
/* release ref, as we do not need the urb anymore */
usb_free_urb(urb);
return NETDEV_TX_OK;
drop:
if (!ucan_release_context(up, context))
netdev_err(up->netdev,
"xmit drop: failed to release context\n");
dev_kfree_skb(skb);
up->netdev->stats.tx_dropped++;
return NETDEV_TX_OK;
}
/* Device goes down
*
* Clean up used resources
*/
static int ucan_close(struct net_device *netdev)
{
int ret;
struct ucan_priv *up = netdev_priv(netdev);
up->can.state = CAN_STATE_STOPPED;
/* stop sending data */
usb_kill_anchored_urbs(&up->tx_urbs);
/* stop receiving data */
usb_kill_anchored_urbs(&up->rx_urbs);
/* stop and reset can device */
ret = ucan_ctrl_command_out(up, UCAN_COMMAND_STOP, 0, 0);
if (ret < 0)
netdev_err(up->netdev,
"could not stop device, code: %d\n",
ret);
ret = ucan_ctrl_command_out(up, UCAN_COMMAND_RESET, 0, 0);
if (ret < 0)
netdev_err(up->netdev,
"could not reset device, code: %d\n",
ret);
netif_stop_queue(netdev);
ucan_release_context_array(up);
close_candev(up->netdev);
return 0;
}
/* CAN driver callbacks */
static const struct net_device_ops ucan_netdev_ops = {
.ndo_open = ucan_open,
.ndo_stop = ucan_close,
.ndo_start_xmit = ucan_start_xmit,
.ndo_change_mtu = can_change_mtu,
};
/* Request to set bittiming
*
* This function generates a USB set bittiming message and transmits
* it to the device
*/
static int ucan_set_bittiming(struct net_device *netdev)
{
int ret;
struct ucan_priv *up = netdev_priv(netdev);
struct ucan_ctl_cmd_set_bittiming *cmd_set_bittiming;
cmd_set_bittiming = &up->ctl_msg_buffer->cmd_set_bittiming;
cmd_set_bittiming->tq = cpu_to_le32(up->can.bittiming.tq);
cmd_set_bittiming->brp = cpu_to_le16(up->can.bittiming.brp);
cmd_set_bittiming->sample_point =
cpu_to_le16(up->can.bittiming.sample_point);
cmd_set_bittiming->prop_seg = up->can.bittiming.prop_seg;
cmd_set_bittiming->phase_seg1 = up->can.bittiming.phase_seg1;
cmd_set_bittiming->phase_seg2 = up->can.bittiming.phase_seg2;
cmd_set_bittiming->sjw = up->can.bittiming.sjw;
ret = ucan_ctrl_command_out(up, UCAN_COMMAND_SET_BITTIMING, 0,
sizeof(*cmd_set_bittiming));
return (ret < 0) ? ret : 0;
}
/* Restart the device to get it out of BUS-OFF state.
* Called when the user runs "ip link set can1 type can restart".
*/
static int ucan_set_mode(struct net_device *netdev, enum can_mode mode)
{
int ret;
unsigned long flags;
struct ucan_priv *up = netdev_priv(netdev);
switch (mode) {
case CAN_MODE_START:
netdev_dbg(up->netdev, "restarting device\n");
ret = ucan_ctrl_command_out(up, UCAN_COMMAND_RESTART, 0, 0);
up->can.state = CAN_STATE_ERROR_ACTIVE;
/* check if queue can be restarted,
* up->available_tx_urbs must be protected by the
* lock
*/
spin_lock_irqsave(&up->context_lock, flags);
if (up->available_tx_urbs > 0)
netif_wake_queue(up->netdev);
spin_unlock_irqrestore(&up->context_lock, flags);
return ret;
default:
return -EOPNOTSUPP;
}
}
/* Probe the device, reset it and gather general device information */
static int ucan_probe(struct usb_interface *intf,
const struct usb_device_id *id)
{
int ret;
int i;
u32 protocol_version;
struct usb_device *udev;
struct net_device *netdev;
struct usb_host_interface *iface_desc;
struct ucan_priv *up;
struct usb_endpoint_descriptor *ep;
u16 in_ep_size;
u16 out_ep_size;
u8 in_ep_addr;
u8 out_ep_addr;
union ucan_ctl_payload *ctl_msg_buffer;
char firmware_str[sizeof(union ucan_ctl_payload) + 1];
udev = interface_to_usbdev(intf);
/* Stage 1 - Interface Parsing
* ---------------------------
*
* Identify the device USB interface descriptor and its
* endpoints. Probing is aborted on errors.
*/
/* check if the interface is sane */
iface_desc = intf->cur_altsetting;
if (!iface_desc)
return -ENODEV;
dev_info(&udev->dev,
"%s: probing device on interface #%d\n",
UCAN_DRIVER_NAME,
iface_desc->desc.bInterfaceNumber);
/* interface sanity check */
if (iface_desc->desc.bNumEndpoints != 2) {
dev_err(&udev->dev,
"%s: invalid EP count (%d)",
UCAN_DRIVER_NAME, iface_desc->desc.bNumEndpoints);
goto err_firmware_needs_update;
}
/* check interface endpoints */
in_ep_addr = 0;
out_ep_addr = 0;
in_ep_size = 0;
out_ep_size = 0;
for (i = 0; i < iface_desc->desc.bNumEndpoints; i++) {
ep = &iface_desc->endpoint[i].desc;
if (((ep->bEndpointAddress & USB_ENDPOINT_DIR_MASK) != 0) &&
((ep->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) ==
USB_ENDPOINT_XFER_BULK)) {
/* In Endpoint */
in_ep_addr = ep->bEndpointAddress;
in_ep_addr &= USB_ENDPOINT_NUMBER_MASK;
in_ep_size = le16_to_cpu(ep->wMaxPacketSize);
} else if (((ep->bEndpointAddress & USB_ENDPOINT_DIR_MASK) ==
0) &&
((ep->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) ==
USB_ENDPOINT_XFER_BULK)) {
/* Out Endpoint */
out_ep_addr = ep->bEndpointAddress;
out_ep_addr &= USB_ENDPOINT_NUMBER_MASK;
out_ep_size = le16_to_cpu(ep->wMaxPacketSize);
}
}
/* check if interface is sane */
if (!in_ep_addr || !out_ep_addr) {
dev_err(&udev->dev, "%s: invalid endpoint configuration\n",
UCAN_DRIVER_NAME);
goto err_firmware_needs_update;
}
if (in_ep_size < sizeof(struct ucan_message_in)) {
dev_err(&udev->dev, "%s: invalid in_ep MaxPacketSize\n",
UCAN_DRIVER_NAME);
goto err_firmware_needs_update;
}
if (out_ep_size < sizeof(struct ucan_message_out)) {
dev_err(&udev->dev, "%s: invalid out_ep MaxPacketSize\n",
UCAN_DRIVER_NAME);
goto err_firmware_needs_update;
}
/* Stage 2 - Device Identification
* -------------------------------
*
* The device interface seems to be a ucan device. Do further
* compatibility checks. On error probing is aborted, on
* success this stage leaves the ctl_msg_buffer with the
* reported contents of a GET_INFO command (supported
* bittimings, tx_fifo depth). This information is used in
* Stage 3 for the final driver initialisation.
*/
/* Prepare memory for control transfers */
ctl_msg_buffer = devm_kzalloc(&udev->dev,
sizeof(union ucan_ctl_payload),
GFP_KERNEL);
if (!ctl_msg_buffer) {
dev_err(&udev->dev,
"%s: failed to allocate control pipe memory\n",
UCAN_DRIVER_NAME);
return -ENOMEM;
}
/* get protocol version
*
* note: ucan_ctrl_command_* wrappers cannot be used yet
* because `up` is initialised in Stage 3
*/
ret = usb_control_msg(udev,
usb_rcvctrlpipe(udev, 0),
UCAN_COMMAND_GET,
USB_DIR_IN | USB_TYPE_VENDOR |
USB_RECIP_INTERFACE,
UCAN_COMMAND_GET_PROTOCOL_VERSION,
iface_desc->desc.bInterfaceNumber,
ctl_msg_buffer,
sizeof(union ucan_ctl_payload),
UCAN_USB_CTL_PIPE_TIMEOUT);
/* older firmware versions do not support this command - those
 * are not supported by this driver
*/
if (ret != 4) {
dev_err(&udev->dev,
"%s: could not read protocol version, ret=%d\n",
UCAN_DRIVER_NAME, ret);
if (ret >= 0)
ret = -EINVAL;
goto err_firmware_needs_update;
}
/* this driver currently supports protocol version 3 only */
protocol_version =
le32_to_cpu(ctl_msg_buffer->cmd_get_protocol_version.version);
if (protocol_version < UCAN_PROTOCOL_VERSION_MIN ||
protocol_version > UCAN_PROTOCOL_VERSION_MAX) {
dev_err(&udev->dev,
"%s: device protocol version %d is not supported\n",
UCAN_DRIVER_NAME, protocol_version);
goto err_firmware_needs_update;
}
/* request the device information and store it in ctl_msg_buffer
*
* note: ucan_ctrl_command_* wrappers cannot be used yet
* because `up` is initialised in Stage 3
*/
ret = usb_control_msg(udev,
usb_rcvctrlpipe(udev, 0),
UCAN_COMMAND_GET,
USB_DIR_IN | USB_TYPE_VENDOR |
USB_RECIP_INTERFACE,
UCAN_COMMAND_GET_INFO,
iface_desc->desc.bInterfaceNumber,
ctl_msg_buffer,
sizeof(ctl_msg_buffer->cmd_get_device_info),
UCAN_USB_CTL_PIPE_TIMEOUT);
if (ret < 0) {
dev_err(&udev->dev, "%s: failed to retrieve device info\n",
UCAN_DRIVER_NAME);
goto err_firmware_needs_update;
}
if (ret < sizeof(ctl_msg_buffer->cmd_get_device_info)) {
dev_err(&udev->dev, "%s: device reported invalid device info\n",
UCAN_DRIVER_NAME);
goto err_firmware_needs_update;
}
if (ctl_msg_buffer->cmd_get_device_info.tx_fifo == 0) {
dev_err(&udev->dev,
"%s: device reported invalid tx-fifo size\n",
UCAN_DRIVER_NAME);
goto err_firmware_needs_update;
}
/* Stage 3 - Driver Initialisation
* -------------------------------
*
* Register device to Linux, prepare private structures and
* reset the device.
*/
/* allocate driver resources */
netdev = alloc_candev(sizeof(struct ucan_priv),
ctl_msg_buffer->cmd_get_device_info.tx_fifo);
if (!netdev) {
dev_err(&udev->dev,
"%s: cannot allocate candev\n", UCAN_DRIVER_NAME);
return -ENOMEM;
}
up = netdev_priv(netdev);
/* initialize data */
up->udev = udev;
up->intf = intf;
up->netdev = netdev;
up->intf_index = iface_desc->desc.bInterfaceNumber;
up->in_ep_addr = in_ep_addr;
up->out_ep_addr = out_ep_addr;
up->in_ep_size = in_ep_size;
up->ctl_msg_buffer = ctl_msg_buffer;
up->context_array = NULL;
up->available_tx_urbs = 0;
up->can.state = CAN_STATE_STOPPED;
up->can.bittiming_const = &up->device_info.bittiming_const;
up->can.do_set_bittiming = ucan_set_bittiming;
up->can.do_set_mode = &ucan_set_mode;
spin_lock_init(&up->context_lock);
spin_lock_init(&up->echo_skb_lock);
netdev->netdev_ops = &ucan_netdev_ops;
usb_set_intfdata(intf, up);
SET_NETDEV_DEV(netdev, &intf->dev);
/* parse device information
* the data retrieved in Stage 2 is still available in
* up->ctl_msg_buffer
*/
ucan_parse_device_info(up, &ctl_msg_buffer->cmd_get_device_info);
/* just print some device information - if available */
ret = ucan_device_request_in(up, UCAN_DEVICE_GET_FW_STRING, 0,
sizeof(union ucan_ctl_payload));
if (ret > 0) {
/* copy string while ensuring zero termination */
strncpy(firmware_str, up->ctl_msg_buffer->raw,
sizeof(union ucan_ctl_payload));
firmware_str[sizeof(union ucan_ctl_payload)] = '\0';
} else {
strcpy(firmware_str, "unknown");
}
/* device is compatible, reset it */
ret = ucan_ctrl_command_out(up, UCAN_COMMAND_RESET, 0, 0);
if (ret < 0)
goto err_free_candev;
init_usb_anchor(&up->rx_urbs);
init_usb_anchor(&up->tx_urbs);
up->can.state = CAN_STATE_STOPPED;
/* register the device */
ret = register_candev(netdev);
if (ret)
goto err_free_candev;
/* initialisation complete, log device info */
netdev_info(up->netdev, "registered device\n");
netdev_info(up->netdev, "firmware string: %s\n", firmware_str);
/* success */
return 0;
err_free_candev:
free_candev(netdev);
return ret;
err_firmware_needs_update:
dev_err(&udev->dev,
"%s: probe failed; try to update the device firmware\n",
UCAN_DRIVER_NAME);
return -ENODEV;
}
/* disconnect the device */
static void ucan_disconnect(struct usb_interface *intf)
{
struct usb_device *udev;
struct ucan_priv *up = usb_get_intfdata(intf);
udev = interface_to_usbdev(intf);
usb_set_intfdata(intf, NULL);
if (up) {
unregister_netdev(up->netdev);
free_candev(up->netdev);
}
}
static struct usb_device_id ucan_table[] = {
/* Mule (soldered onto compute modules) */
{USB_DEVICE_INTERFACE_NUMBER(0x2294, 0x425a, 0)},
/* Seal (standalone USB stick) */
{USB_DEVICE_INTERFACE_NUMBER(0x2294, 0x425b, 0)},
{} /* Terminating entry */
};
MODULE_DEVICE_TABLE(usb, ucan_table);
/* driver callbacks */
static struct usb_driver ucan_driver = {
.name = UCAN_DRIVER_NAME,
.probe = ucan_probe,
.disconnect = ucan_disconnect,
.id_table = ucan_table,
};
module_usb_driver(ucan_driver);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Martin Elshuber <martin.elshuber@theobroma-systems.com>");
MODULE_AUTHOR("Jakob Unterwurzacher <jakob.unterwurzacher@theobroma-systems.com>");
MODULE_DESCRIPTION("Driver for Theobroma Systems UCAN devices");
......@@ -2,7 +2,7 @@
*
* Copyright (C) 2012 - 2014 Xilinx, Inc.
* Copyright (C) 2009 PetaLogix. All rights reserved.
* Copyright (C) 2017 Sandvik Mining and Construction Oy
* Copyright (C) 2017 - 2018 Sandvik Mining and Construction Oy
*
* Description:
* This driver is developed for Axi CAN IP and for Zynq CANPS Controller.
......@@ -51,16 +51,34 @@ enum xcan_reg {
XCAN_ISR_OFFSET = 0x1C, /* Interrupt status */
XCAN_IER_OFFSET = 0x20, /* Interrupt enable */
XCAN_ICR_OFFSET = 0x24, /* Interrupt clear */
XCAN_TXFIFO_ID_OFFSET = 0x30,/* TX FIFO ID */
XCAN_TXFIFO_DLC_OFFSET = 0x34, /* TX FIFO DLC */
XCAN_TXFIFO_DW1_OFFSET = 0x38, /* TX FIFO Data Word 1 */
XCAN_TXFIFO_DW2_OFFSET = 0x3C, /* TX FIFO Data Word 2 */
XCAN_RXFIFO_ID_OFFSET = 0x50, /* RX FIFO ID */
XCAN_RXFIFO_DLC_OFFSET = 0x54, /* RX FIFO DLC */
XCAN_RXFIFO_DW1_OFFSET = 0x58, /* RX FIFO Data Word 1 */
XCAN_RXFIFO_DW2_OFFSET = 0x5C, /* RX FIFO Data Word 2 */
/* not on CAN FD cores */
XCAN_TXFIFO_OFFSET = 0x30, /* TX FIFO base */
XCAN_RXFIFO_OFFSET = 0x50, /* RX FIFO base */
XCAN_AFR_OFFSET = 0x60, /* Acceptance Filter */
/* only on CAN FD cores */
XCAN_TRR_OFFSET = 0x0090, /* TX Buffer Ready Request */
XCAN_AFR_EXT_OFFSET = 0x00E0, /* Acceptance Filter */
XCAN_FSR_OFFSET = 0x00E8, /* RX FIFO Status */
XCAN_TXMSG_BASE_OFFSET = 0x0100, /* TX Message Space */
XCAN_RXMSG_BASE_OFFSET = 0x1100, /* RX Message Space */
};
#define XCAN_FRAME_ID_OFFSET(frame_base) ((frame_base) + 0x00)
#define XCAN_FRAME_DLC_OFFSET(frame_base) ((frame_base) + 0x04)
#define XCAN_FRAME_DW1_OFFSET(frame_base) ((frame_base) + 0x08)
#define XCAN_FRAME_DW2_OFFSET(frame_base) ((frame_base) + 0x0C)
#define XCAN_CANFD_FRAME_SIZE 0x48
#define XCAN_TXMSG_FRAME_OFFSET(n) (XCAN_TXMSG_BASE_OFFSET + \
XCAN_CANFD_FRAME_SIZE * (n))
#define XCAN_RXMSG_FRAME_OFFSET(n) (XCAN_RXMSG_BASE_OFFSET + \
XCAN_CANFD_FRAME_SIZE * (n))
/* the single TX mailbox used by this driver on CAN FD HW */
#define XCAN_TX_MAILBOX_IDX 0
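/* Example (illustrative): with a frame size of 0x48 bytes, RX buffer 2
 * of a CAN FD core starts at
 *   XCAN_RXMSG_FRAME_OFFSET(2) = 0x1100 + 2 * 0x48 = 0x1190
 */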
/* CAN register bit masks - XCAN_<REG>_<BIT>_MASK */
#define XCAN_SRR_CEN_MASK 0x00000002 /* CAN enable */
#define XCAN_SRR_RESET_MASK 0x00000001 /* Soft Reset the CAN core */
......@@ -70,6 +88,9 @@ enum xcan_reg {
#define XCAN_BTR_SJW_MASK 0x00000180 /* Synchronous jump width */
#define XCAN_BTR_TS2_MASK 0x00000070 /* Time segment 2 */
#define XCAN_BTR_TS1_MASK 0x0000000F /* Time segment 1 */
#define XCAN_BTR_SJW_MASK_CANFD 0x000F0000 /* Synchronous jump width */
#define XCAN_BTR_TS2_MASK_CANFD 0x00000F00 /* Time segment 2 */
#define XCAN_BTR_TS1_MASK_CANFD 0x0000003F /* Time segment 1 */
#define XCAN_ECR_REC_MASK 0x0000FF00 /* Receive error counter */
#define XCAN_ECR_TEC_MASK 0x000000FF /* Transmit error counter */
#define XCAN_ESR_ACKER_MASK 0x00000010 /* ACK error */
......@@ -83,6 +104,7 @@ enum xcan_reg {
#define XCAN_SR_NORMAL_MASK 0x00000008 /* Normal mode */
#define XCAN_SR_LBACK_MASK 0x00000002 /* Loop back mode */
#define XCAN_SR_CONFIG_MASK 0x00000001 /* Configuration mode */
#define XCAN_IXR_RXMNF_MASK 0x00020000 /* RX match not finished */
#define XCAN_IXR_TXFEMP_MASK 0x00004000 /* TX FIFO Empty */
#define XCAN_IXR_WKUP_MASK 0x00000800 /* Wake up interrupt */
#define XCAN_IXR_SLP_MASK 0x00000400 /* Sleep interrupt */
......@@ -100,15 +122,15 @@ enum xcan_reg {
#define XCAN_IDR_ID2_MASK 0x0007FFFE /* Extended message ident */
#define XCAN_IDR_RTR_MASK 0x00000001 /* Remote TX request */
#define XCAN_DLCR_DLC_MASK 0xF0000000 /* Data length code */
#define XCAN_INTR_ALL (XCAN_IXR_TXOK_MASK | XCAN_IXR_BSOFF_MASK |\
XCAN_IXR_WKUP_MASK | XCAN_IXR_SLP_MASK | \
XCAN_IXR_RXNEMP_MASK | XCAN_IXR_ERROR_MASK | \
XCAN_IXR_RXOFLW_MASK | XCAN_IXR_ARBLST_MASK)
#define XCAN_FSR_FL_MASK 0x00003F00 /* RX Fill Level */
#define XCAN_FSR_IRI_MASK 0x00000080 /* RX Increment Read Index */
#define XCAN_FSR_RI_MASK 0x0000001F /* RX Read Index */
/* CAN register bit shift - XCAN_<REG>_<BIT>_SHIFT */
#define XCAN_BTR_SJW_SHIFT 7 /* Synchronous jump width */
#define XCAN_BTR_TS2_SHIFT 4 /* Time segment 2 */
#define XCAN_BTR_SJW_SHIFT_CANFD 16 /* Synchronous jump width */
#define XCAN_BTR_TS2_SHIFT_CANFD 8 /* Time segment 2 */
#define XCAN_IDR_ID1_SHIFT 21 /* Standard Messg Identifier */
#define XCAN_IDR_ID2_SHIFT 1 /* Extended Message Identifier */
#define XCAN_DLCR_DLC_SHIFT 28 /* Data length code */
......@@ -118,6 +140,27 @@ enum xcan_reg {
#define XCAN_FRAME_MAX_DATA_LEN 8
#define XCAN_TIMEOUT (1 * HZ)
/* TX-FIFO-empty interrupt available */
#define XCAN_FLAG_TXFEMP 0x0001
/* RX Match Not Finished interrupt available */
#define XCAN_FLAG_RXMNF 0x0002
/* Extended acceptance filters with control at 0xE0 */
#define XCAN_FLAG_EXT_FILTERS 0x0004
/* TX mailboxes instead of TX FIFO */
#define XCAN_FLAG_TX_MAILBOXES 0x0008
/* RX FIFO with each buffer in separate registers at 0x1100
* instead of the regular FIFO at 0x50
*/
#define XCAN_FLAG_RX_FIFO_MULTI 0x0010
struct xcan_devtype_data {
unsigned int flags;
const struct can_bittiming_const *bittiming_const;
const char *bus_clk_name;
unsigned int btr_ts2_shift;
unsigned int btr_sjw_shift;
};
/**
* struct xcan_priv - This definition defines the CAN driver instance
* @can: CAN private data structure.
......@@ -133,6 +176,7 @@ enum xcan_reg {
* @irq_flags: For request_irq()
* @bus_clk: Pointer to struct clk
* @can_clk: Pointer to struct clk
* @devtype: Device type specific constants
*/
struct xcan_priv {
struct can_priv can;
......@@ -149,6 +193,7 @@ struct xcan_priv {
unsigned long irq_flags;
struct clk *bus_clk;
struct clk *can_clk;
struct xcan_devtype_data devtype;
};
/* CAN Bittiming constants as per Xilinx CAN specs */
......@@ -164,9 +209,16 @@ static const struct can_bittiming_const xcan_bittiming_const = {
.brp_inc = 1,
};
#define XCAN_CAP_WATERMARK 0x0001
struct xcan_devtype_data {
unsigned int caps;
static const struct can_bittiming_const xcan_bittiming_const_canfd = {
.name = DRIVER_NAME,
.tseg1_min = 1,
.tseg1_max = 64,
.tseg2_min = 1,
.tseg2_max = 16,
.sjw_max = 16,
.brp_min = 1,
.brp_max = 256,
.brp_inc = 1,
};
/**
......@@ -223,6 +275,23 @@ static u32 xcan_read_reg_be(const struct xcan_priv *priv, enum xcan_reg reg)
return ioread32be(priv->reg_base + reg);
}
/**
* xcan_rx_int_mask - Get the mask for the receive interrupt
* @priv: Driver private data structure
*
* Return: The receive interrupt mask used by the driver on this HW
*/
static u32 xcan_rx_int_mask(const struct xcan_priv *priv)
{
/* RXNEMP is better suited for our use case as it cannot be cleared
* while the FIFO is non-empty, but CAN FD HW does not have it
*/
if (priv->devtype.flags & XCAN_FLAG_RX_FIFO_MULTI)
return XCAN_IXR_RXOK_MASK;
else
return XCAN_IXR_RXNEMP_MASK;
}
/**
* set_reset_mode - Resets the CAN device mode
* @ndev: Pointer to net_device structure
......@@ -287,10 +356,10 @@ static int xcan_set_bittiming(struct net_device *ndev)
btr1 = (bt->prop_seg + bt->phase_seg1 - 1);
/* Setting Time Segment 2 in BTR Register */
btr1 |= (bt->phase_seg2 - 1) << XCAN_BTR_TS2_SHIFT;
btr1 |= (bt->phase_seg2 - 1) << priv->devtype.btr_ts2_shift;
/* Setting Synchronous jump width in BTR Register */
btr1 |= (bt->sjw - 1) << XCAN_BTR_SJW_SHIFT;
btr1 |= (bt->sjw - 1) << priv->devtype.btr_sjw_shift;
priv->write_reg(priv, XCAN_BRPR_OFFSET, btr0);
priv->write_reg(priv, XCAN_BTR_OFFSET, btr1);
......@@ -318,6 +387,7 @@ static int xcan_chip_start(struct net_device *ndev)
u32 reg_msr, reg_sr_mask;
int err;
unsigned long timeout;
u32 ier;
/* Check if it is in reset mode */
err = set_reset_mode(ndev);
......@@ -329,7 +399,15 @@ static int xcan_chip_start(struct net_device *ndev)
return err;
/* Enable interrupts */
priv->write_reg(priv, XCAN_IER_OFFSET, XCAN_INTR_ALL);
ier = XCAN_IXR_TXOK_MASK | XCAN_IXR_BSOFF_MASK |
XCAN_IXR_WKUP_MASK | XCAN_IXR_SLP_MASK |
XCAN_IXR_ERROR_MASK | XCAN_IXR_RXOFLW_MASK |
XCAN_IXR_ARBLST_MASK | xcan_rx_int_mask(priv);
if (priv->devtype.flags & XCAN_FLAG_RXMNF)
ier |= XCAN_IXR_RXMNF_MASK;
priv->write_reg(priv, XCAN_IER_OFFSET, ier);
/* Check whether it is loopback mode or normal mode */
if (priv->can.ctrlmode & CAN_CTRLMODE_LOOPBACK) {
......@@ -340,6 +418,12 @@ static int xcan_chip_start(struct net_device *ndev)
reg_sr_mask = XCAN_SR_NORMAL_MASK;
}
/* enable the first extended filter, if any, as cores with extended
* filtering default to non-receipt if all filters are disabled
*/
if (priv->devtype.flags & XCAN_FLAG_EXT_FILTERS)
priv->write_reg(priv, XCAN_AFR_EXT_OFFSET, 0x00000001);
priv->write_reg(priv, XCAN_MSR_OFFSET, reg_msr);
priv->write_reg(priv, XCAN_SRR_OFFSET, XCAN_SRR_CEN_MASK);
......@@ -390,34 +474,15 @@ static int xcan_do_set_mode(struct net_device *ndev, enum can_mode mode)
}
/**
* xcan_start_xmit - Starts the transmission
* @skb: sk_buff pointer that contains data to be Txed
* @ndev: Pointer to net_device structure
*
* This function is invoked from upper layers to initiate transmission. This
* function uses the next available free txbuff and populates their fields to
* start the transmission.
*
* Return: 0 on success and failure value on error
* xcan_write_frame - Write a frame to HW
* @priv: Driver private data structure
* @skb: sk_buff pointer that contains data to be Txed
* @frame_offset: Register offset to write the frame to
*/
static int xcan_start_xmit(struct sk_buff *skb, struct net_device *ndev)
static void xcan_write_frame(struct xcan_priv *priv, struct sk_buff *skb,
int frame_offset)
{
struct xcan_priv *priv = netdev_priv(ndev);
struct net_device_stats *stats = &ndev->stats;
struct can_frame *cf = (struct can_frame *)skb->data;
u32 id, dlc, data[2] = {0, 0};
unsigned long flags;
if (can_dropped_invalid_skb(ndev, skb))
return NETDEV_TX_OK;
/* Check if the TX buffer is full */
if (unlikely(priv->read_reg(priv, XCAN_SR_OFFSET) &
XCAN_SR_TXFLL_MASK)) {
netif_stop_queue(ndev);
netdev_err(ndev, "BUG!, TX FIFO full when queue awake!\n");
return NETDEV_TX_BUSY;
}
struct can_frame *cf = (struct can_frame *)skb->data;
/* Watch carefully on the bit sequence */
if (cf->can_id & CAN_EFF_FLAG) {
......@@ -453,24 +518,44 @@ static int xcan_start_xmit(struct sk_buff *skb, struct net_device *ndev)
if (cf->can_dlc > 4)
data[1] = be32_to_cpup((__be32 *)(cf->data + 4));
priv->write_reg(priv, XCAN_FRAME_ID_OFFSET(frame_offset), id);
/* If the CAN frame is RTR frame this write triggers transmission
* (not on CAN FD)
*/
priv->write_reg(priv, XCAN_FRAME_DLC_OFFSET(frame_offset), dlc);
if (!(cf->can_id & CAN_RTR_FLAG)) {
priv->write_reg(priv, XCAN_FRAME_DW1_OFFSET(frame_offset),
data[0]);
/* If the CAN frame is Standard/Extended frame this
* write triggers transmission (not on CAN FD)
*/
priv->write_reg(priv, XCAN_FRAME_DW2_OFFSET(frame_offset),
data[1]);
}
}
/**
* xcan_start_xmit_fifo - Starts the transmission (FIFO mode)
*
* Return: 0 on success, -ENOSPC if FIFO is full.
*/
static int xcan_start_xmit_fifo(struct sk_buff *skb, struct net_device *ndev)
{
struct xcan_priv *priv = netdev_priv(ndev);
unsigned long flags;
/* Check if the TX buffer is full */
if (unlikely(priv->read_reg(priv, XCAN_SR_OFFSET) &
XCAN_SR_TXFLL_MASK))
return -ENOSPC;
can_put_echo_skb(skb, ndev, priv->tx_head % priv->tx_max);
spin_lock_irqsave(&priv->tx_lock, flags);
priv->tx_head++;
/* Write the Frame to Xilinx CAN TX FIFO */
priv->write_reg(priv, XCAN_TXFIFO_ID_OFFSET, id);
/* If the CAN frame is RTR frame this write triggers tranmission */
priv->write_reg(priv, XCAN_TXFIFO_DLC_OFFSET, dlc);
if (!(cf->can_id & CAN_RTR_FLAG)) {
priv->write_reg(priv, XCAN_TXFIFO_DW1_OFFSET, data[0]);
/* If the CAN frame is Standard/Extended frame this
* write triggers tranmission
*/
priv->write_reg(priv, XCAN_TXFIFO_DW2_OFFSET, data[1]);
stats->tx_bytes += cf->can_dlc;
}
xcan_write_frame(priv, skb, XCAN_TXFIFO_OFFSET);
/* Clear TX-FIFO-empty interrupt for xcan_tx_interrupt() */
if (priv->tx_max > 1)
......@@ -482,6 +567,70 @@ static int xcan_start_xmit(struct sk_buff *skb, struct net_device *ndev)
spin_unlock_irqrestore(&priv->tx_lock, flags);
return 0;
}
/**
* xcan_start_xmit_mailbox - Starts the transmission (mailbox mode)
*
* Return: 0 on success, -ENOSPC if there is no space
*/
static int xcan_start_xmit_mailbox(struct sk_buff *skb, struct net_device *ndev)
{
struct xcan_priv *priv = netdev_priv(ndev);
unsigned long flags;
if (unlikely(priv->read_reg(priv, XCAN_TRR_OFFSET) &
BIT(XCAN_TX_MAILBOX_IDX)))
return -ENOSPC;
can_put_echo_skb(skb, ndev, 0);
spin_lock_irqsave(&priv->tx_lock, flags);
priv->tx_head++;
xcan_write_frame(priv, skb,
XCAN_TXMSG_FRAME_OFFSET(XCAN_TX_MAILBOX_IDX));
/* Mark buffer as ready for transmit */
priv->write_reg(priv, XCAN_TRR_OFFSET, BIT(XCAN_TX_MAILBOX_IDX));
netif_stop_queue(ndev);
spin_unlock_irqrestore(&priv->tx_lock, flags);
return 0;
}
/**
* xcan_start_xmit - Starts the transmission
* @skb: sk_buff pointer that contains data to be Txed
* @ndev: Pointer to net_device structure
*
* This function is invoked from upper layers to initiate transmission.
*
* Return: NETDEV_TX_OK on success and NETDEV_TX_BUSY when the tx queue is full
*/
static int xcan_start_xmit(struct sk_buff *skb, struct net_device *ndev)
{
struct xcan_priv *priv = netdev_priv(ndev);
int ret;
if (can_dropped_invalid_skb(ndev, skb))
return NETDEV_TX_OK;
if (priv->devtype.flags & XCAN_FLAG_TX_MAILBOXES)
ret = xcan_start_xmit_mailbox(skb, ndev);
else
ret = xcan_start_xmit_fifo(skb, ndev);
if (ret < 0) {
netdev_err(ndev, "BUG!, TX full when queue awake!\n");
netif_stop_queue(ndev);
return NETDEV_TX_BUSY;
}
return NETDEV_TX_OK;
}
......@@ -489,13 +638,14 @@ static int xcan_start_xmit(struct sk_buff *skb, struct net_device *ndev)
* xcan_rx - Is called from CAN isr to complete the received
* frame processing
* @ndev: Pointer to net_device structure
* @frame_base: Register offset to the frame to be read
*
* This function is invoked from the CAN isr(poll) to process the Rx frames. It
* does minimal processing and invokes "netif_receive_skb" to complete further
* processing.
* Return: 1 on success and 0 on failure.
*/
static int xcan_rx(struct net_device *ndev)
static int xcan_rx(struct net_device *ndev, int frame_base)
{
struct xcan_priv *priv = netdev_priv(ndev);
struct net_device_stats *stats = &ndev->stats;
......@@ -510,9 +660,9 @@ static int xcan_rx(struct net_device *ndev)
}
/* Read a frame from Xilinx zynq CANPS */
id_xcan = priv->read_reg(priv, XCAN_RXFIFO_ID_OFFSET);
dlc = priv->read_reg(priv, XCAN_RXFIFO_DLC_OFFSET) >>
XCAN_DLCR_DLC_SHIFT;
id_xcan = priv->read_reg(priv, XCAN_FRAME_ID_OFFSET(frame_base));
dlc = priv->read_reg(priv, XCAN_FRAME_DLC_OFFSET(frame_base)) >>
XCAN_DLCR_DLC_SHIFT;
/* Change Xilinx CAN data length format to socketCAN data format */
cf->can_dlc = get_can_dlc(dlc);
......@@ -535,8 +685,8 @@ static int xcan_rx(struct net_device *ndev)
}
/* DW1/DW2 must always be read to remove message from RXFIFO */
data[0] = priv->read_reg(priv, XCAN_RXFIFO_DW1_OFFSET);
data[1] = priv->read_reg(priv, XCAN_RXFIFO_DW2_OFFSET);
data[0] = priv->read_reg(priv, XCAN_FRAME_DW1_OFFSET(frame_base));
data[1] = priv->read_reg(priv, XCAN_FRAME_DW2_OFFSET(frame_base));
if (!(cf->can_id & CAN_RTR_FLAG)) {
/* Change Xilinx CAN data format to socketCAN data format */
......@@ -594,39 +744,19 @@ static void xcan_set_error_state(struct net_device *ndev,
u32 ecr = priv->read_reg(priv, XCAN_ECR_OFFSET);
u32 txerr = ecr & XCAN_ECR_TEC_MASK;
u32 rxerr = (ecr & XCAN_ECR_REC_MASK) >> XCAN_ESR_REC_SHIFT;
enum can_state tx_state = txerr >= rxerr ? new_state : 0;
enum can_state rx_state = txerr <= rxerr ? new_state : 0;
/* non-ERROR states are handled elsewhere */
if (WARN_ON(new_state > CAN_STATE_ERROR_PASSIVE))
return;
priv->can.state = new_state;
can_change_state(ndev, cf, tx_state, rx_state);
if (cf) {
cf->can_id |= CAN_ERR_CRTL;
cf->data[6] = txerr;
cf->data[7] = rxerr;
}
switch (new_state) {
case CAN_STATE_ERROR_PASSIVE:
priv->can.can_stats.error_passive++;
if (cf)
cf->data[1] = (rxerr > 127) ?
CAN_ERR_CRTL_RX_PASSIVE :
CAN_ERR_CRTL_TX_PASSIVE;
break;
case CAN_STATE_ERROR_WARNING:
priv->can.can_stats.error_warning++;
if (cf)
cf->data[1] |= (txerr > rxerr) ?
CAN_ERR_CRTL_TX_WARNING :
CAN_ERR_CRTL_RX_WARNING;
break;
case CAN_STATE_ERROR_ACTIVE:
if (cf)
cf->data[1] |= CAN_ERR_CRTL_ACTIVE;
break;
default:
/* non-ERROR states are handled elsewhere */
WARN_ON(1);
break;
}
}
/**
......@@ -703,7 +833,8 @@ static void xcan_err_interrupt(struct net_device *ndev, u32 isr)
} else {
enum can_state new_state = xcan_current_error_state(ndev);
xcan_set_error_state(ndev, new_state, skb ? cf : NULL);
if (new_state != priv->can.state)
xcan_set_error_state(ndev, new_state, skb ? cf : NULL);
}
/* Check for Arbitration lost interrupt */
......@@ -725,6 +856,17 @@ static void xcan_err_interrupt(struct net_device *ndev, u32 isr)
}
}
/* Check for RX Match Not Finished interrupt */
if (isr & XCAN_IXR_RXMNF_MASK) {
stats->rx_dropped++;
stats->rx_errors++;
netdev_err(ndev, "RX match not finished, frame discarded\n");
if (skb) {
cf->can_id |= CAN_ERR_CRTL;
cf->data[1] |= CAN_ERR_CRTL_UNSPEC;
}
}
/* Check for error interrupt */
if (isr & XCAN_IXR_ERROR_MASK) {
if (skb)
......@@ -808,6 +950,44 @@ static void xcan_state_interrupt(struct net_device *ndev, u32 isr)
priv->can.state = CAN_STATE_ERROR_ACTIVE;
}
/**
* xcan_rx_fifo_get_next_frame - Get register offset of next RX frame
*
* Return: Register offset of the next frame in RX FIFO.
*/
static int xcan_rx_fifo_get_next_frame(struct xcan_priv *priv)
{
int offset;
if (priv->devtype.flags & XCAN_FLAG_RX_FIFO_MULTI) {
u32 fsr;
/* clear RXOK before the is-empty check so that any newly
* received frame will reassert it without a race
*/
priv->write_reg(priv, XCAN_ICR_OFFSET, XCAN_IXR_RXOK_MASK);
fsr = priv->read_reg(priv, XCAN_FSR_OFFSET);
/* check if RX FIFO is empty */
if (!(fsr & XCAN_FSR_FL_MASK))
return -ENOENT;
offset = XCAN_RXMSG_FRAME_OFFSET(fsr & XCAN_FSR_RI_MASK);
} else {
/* check if RX FIFO is empty */
if (!(priv->read_reg(priv, XCAN_ISR_OFFSET) &
XCAN_IXR_RXNEMP_MASK))
return -ENOENT;
/* frames are read from a static offset */
offset = XCAN_RXFIFO_OFFSET;
}
return offset;
}
/**
* xcan_rx_poll - Poll routine for rx packets (NAPI)
* @napi: napi structure pointer
......@@ -822,14 +1002,24 @@ static int xcan_rx_poll(struct napi_struct *napi, int quota)
{
struct net_device *ndev = napi->dev;
struct xcan_priv *priv = netdev_priv(ndev);
u32 isr, ier;
u32 ier;
int work_done = 0;
isr = priv->read_reg(priv, XCAN_ISR_OFFSET);
while ((isr & XCAN_IXR_RXNEMP_MASK) && (work_done < quota)) {
work_done += xcan_rx(ndev);
priv->write_reg(priv, XCAN_ICR_OFFSET, XCAN_IXR_RXNEMP_MASK);
isr = priv->read_reg(priv, XCAN_ISR_OFFSET);
int frame_offset;
while ((frame_offset = xcan_rx_fifo_get_next_frame(priv)) >= 0 &&
(work_done < quota)) {
work_done += xcan_rx(ndev, frame_offset);
if (priv->devtype.flags & XCAN_FLAG_RX_FIFO_MULTI)
/* increment read index */
priv->write_reg(priv, XCAN_FSR_OFFSET,
XCAN_FSR_IRI_MASK);
else
/* clear rx-not-empty (will actually clear only if
* empty)
*/
priv->write_reg(priv, XCAN_ICR_OFFSET,
XCAN_IXR_RXNEMP_MASK);
}
if (work_done) {
......@@ -840,7 +1030,7 @@ static int xcan_rx_poll(struct napi_struct *napi, int quota)
if (work_done < quota) {
napi_complete_done(napi, work_done);
ier = priv->read_reg(priv, XCAN_IER_OFFSET);
ier |= XCAN_IXR_RXNEMP_MASK;
ier |= xcan_rx_int_mask(priv);
priv->write_reg(priv, XCAN_IER_OFFSET, ier);
}
return work_done;
......@@ -908,8 +1098,8 @@ static void xcan_tx_interrupt(struct net_device *ndev, u32 isr)
}
while (frames_sent--) {
can_get_echo_skb(ndev, priv->tx_tail %
priv->tx_max);
stats->tx_bytes += can_get_echo_skb(ndev, priv->tx_tail %
priv->tx_max);
priv->tx_tail++;
stats->tx_packets++;
}
......@@ -939,6 +1129,7 @@ static irqreturn_t xcan_interrupt(int irq, void *dev_id)
struct xcan_priv *priv = netdev_priv(ndev);
u32 isr, ier;
u32 isr_errors;
u32 rx_int_mask = xcan_rx_int_mask(priv);
/* Get the interrupt status from Xilinx CAN */
isr = priv->read_reg(priv, XCAN_ISR_OFFSET);
......@@ -958,16 +1149,17 @@ static irqreturn_t xcan_interrupt(int irq, void *dev_id)
/* Check for the type of error interrupt and Processing it */
isr_errors = isr & (XCAN_IXR_ERROR_MASK | XCAN_IXR_RXOFLW_MASK |
XCAN_IXR_BSOFF_MASK | XCAN_IXR_ARBLST_MASK);
XCAN_IXR_BSOFF_MASK | XCAN_IXR_ARBLST_MASK |
XCAN_IXR_RXMNF_MASK);
if (isr_errors) {
priv->write_reg(priv, XCAN_ICR_OFFSET, isr_errors);
xcan_err_interrupt(ndev, isr);
}
/* Check for the type of receive interrupt and Processing it */
if (isr & XCAN_IXR_RXNEMP_MASK) {
if (isr & rx_int_mask) {
ier = priv->read_reg(priv, XCAN_IER_OFFSET);
ier &= ~XCAN_IXR_RXNEMP_MASK;
ier &= ~rx_int_mask;
priv->write_reg(priv, XCAN_IER_OFFSET, ier);
napi_schedule(&priv->napi);
}
......@@ -1214,13 +1406,35 @@ static const struct dev_pm_ops xcan_dev_pm_ops = {
};
static const struct xcan_devtype_data xcan_zynq_data = {
.caps = XCAN_CAP_WATERMARK,
.bittiming_const = &xcan_bittiming_const,
.btr_ts2_shift = XCAN_BTR_TS2_SHIFT,
.btr_sjw_shift = XCAN_BTR_SJW_SHIFT,
.bus_clk_name = "pclk",
};
static const struct xcan_devtype_data xcan_axi_data = {
.bittiming_const = &xcan_bittiming_const,
.btr_ts2_shift = XCAN_BTR_TS2_SHIFT,
.btr_sjw_shift = XCAN_BTR_SJW_SHIFT,
.bus_clk_name = "s_axi_aclk",
};
static const struct xcan_devtype_data xcan_canfd_data = {
.flags = XCAN_FLAG_EXT_FILTERS |
XCAN_FLAG_RXMNF |
XCAN_FLAG_TX_MAILBOXES |
XCAN_FLAG_RX_FIFO_MULTI,
.bittiming_const = &xcan_bittiming_const,
.btr_ts2_shift = XCAN_BTR_TS2_SHIFT_CANFD,
.btr_sjw_shift = XCAN_BTR_SJW_SHIFT_CANFD,
.bus_clk_name = "s_axi_aclk",
};
/* Match table for OF platform binding */
static const struct of_device_id xcan_of_match[] = {
{ .compatible = "xlnx,zynq-can-1.0", .data = &xcan_zynq_data },
{ .compatible = "xlnx,axi-can-1.00.a", },
{ .compatible = "xlnx,axi-can-1.00.a", .data = &xcan_axi_data },
{ .compatible = "xlnx,canfd-1.0", .data = &xcan_canfd_data },
{ /* end of list */ },
};
MODULE_DEVICE_TABLE(of, xcan_of_match);
......@@ -1240,9 +1454,12 @@ static int xcan_probe(struct platform_device *pdev)
struct net_device *ndev;
struct xcan_priv *priv;
const struct of_device_id *of_id;
int caps = 0;
const struct xcan_devtype_data *devtype = &xcan_axi_data;
void __iomem *addr;
int ret, rx_max, tx_max, tx_fifo_depth;
int ret;
int rx_max, tx_max;
int hw_tx_max, hw_rx_max;
const char *hw_tx_max_property;
/* Get the virtual base address for the device */
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
......@@ -1252,25 +1469,33 @@ static int xcan_probe(struct platform_device *pdev)
goto err;
}
ret = of_property_read_u32(pdev->dev.of_node, "tx-fifo-depth",
&tx_fifo_depth);
if (ret < 0)
goto err;
of_id = of_match_device(xcan_of_match, &pdev->dev);
if (of_id && of_id->data)
devtype = of_id->data;
ret = of_property_read_u32(pdev->dev.of_node, "rx-fifo-depth", &rx_max);
if (ret < 0)
goto err;
hw_tx_max_property = devtype->flags & XCAN_FLAG_TX_MAILBOXES ?
"tx-mailbox-count" : "tx-fifo-depth";
of_id = of_match_device(xcan_of_match, &pdev->dev);
if (of_id) {
const struct xcan_devtype_data *devtype_data = of_id->data;
ret = of_property_read_u32(pdev->dev.of_node, hw_tx_max_property,
&hw_tx_max);
if (ret < 0) {
dev_err(&pdev->dev, "missing %s property\n",
hw_tx_max_property);
goto err;
}
if (devtype_data)
caps = devtype_data->caps;
ret = of_property_read_u32(pdev->dev.of_node, "rx-fifo-depth",
&hw_rx_max);
if (ret < 0) {
dev_err(&pdev->dev,
"missing rx-fifo-depth property (mailbox mode is not supported)\n");
goto err;
}
/* There is no way to directly figure out how many frames have been
* sent when the TXOK interrupt is processed. If watermark programming
/* With TX FIFO:
*
* There is no way to directly figure out how many frames have been
* sent when the TXOK interrupt is processed. If TXFEMP
* is supported, we can have 2 frames in the FIFO and use TXFEMP
* to determine if 1 or 2 frames have been sent.
* Theoretically we should be able to use TXFWMEMP to determine up
......@@ -1279,12 +1504,20 @@ static int xcan_probe(struct platform_device *pdev)
* than 2 frames in FIFO) is set anyway with no TXOK (a frame was
* sent), which is not a sensible state - possibly TXFWMEMP is not
* completely synchronized with the rest of the bits?
*
* With TX mailboxes:
*
* HW sends frames in CAN ID priority order. To preserve FIFO ordering
* we submit frames one at a time.
*/
if (caps & XCAN_CAP_WATERMARK)
tx_max = min(tx_fifo_depth, 2);
if (!(devtype->flags & XCAN_FLAG_TX_MAILBOXES) &&
(devtype->flags & XCAN_FLAG_TXFEMP))
tx_max = min(hw_tx_max, 2);
else
tx_max = 1;
rx_max = hw_rx_max;
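/* Illustration of the resulting limits (not from the original patch):
 * a FIFO-mode core with TXFEMP and tx-fifo-depth = 16 is driven with
 * tx_max = 2, while a CAN FD core using TX mailboxes always uses
 * tx_max = 1 so that FIFO ordering is preserved.
 */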
/* Create a CAN device instance */
ndev = alloc_candev(sizeof(struct xcan_priv), tx_max);
if (!ndev)
......@@ -1292,13 +1525,14 @@ static int xcan_probe(struct platform_device *pdev)
priv = netdev_priv(ndev);
priv->dev = &pdev->dev;
priv->can.bittiming_const = &xcan_bittiming_const;
priv->can.bittiming_const = devtype->bittiming_const;
priv->can.do_set_mode = xcan_do_set_mode;
priv->can.do_get_berr_counter = xcan_get_berr_counter;
priv->can.ctrlmode_supported = CAN_CTRLMODE_LOOPBACK |
CAN_CTRLMODE_BERR_REPORTING;
priv->reg_base = addr;
priv->tx_max = tx_max;
priv->devtype = *devtype;
spin_lock_init(&priv->tx_lock);
/* Get IRQ for the device */
......@@ -1316,22 +1550,12 @@ static int xcan_probe(struct platform_device *pdev)
ret = PTR_ERR(priv->can_clk);
goto err_free;
}
/* Check for type of CAN device */
if (of_device_is_compatible(pdev->dev.of_node,
"xlnx,zynq-can-1.0")) {
priv->bus_clk = devm_clk_get(&pdev->dev, "pclk");
if (IS_ERR(priv->bus_clk)) {
dev_err(&pdev->dev, "bus clock not found\n");
ret = PTR_ERR(priv->bus_clk);
goto err_free;
}
} else {
priv->bus_clk = devm_clk_get(&pdev->dev, "s_axi_aclk");
if (IS_ERR(priv->bus_clk)) {
dev_err(&pdev->dev, "bus clock not found\n");
ret = PTR_ERR(priv->bus_clk);
goto err_free;
}
priv->bus_clk = devm_clk_get(&pdev->dev, devtype->bus_clk_name);
if (IS_ERR(priv->bus_clk)) {
dev_err(&pdev->dev, "bus clock not found\n");
ret = PTR_ERR(priv->bus_clk);
goto err_free;
}
priv->write_reg = xcan_write_reg_le;
......@@ -1364,9 +1588,9 @@ static int xcan_probe(struct platform_device *pdev)
pm_runtime_put(&pdev->dev);
netdev_dbg(ndev, "reg_base=0x%p irq=%d clock=%d, tx fifo depth: actual %d, using %d\n",
priv->reg_base, ndev->irq, priv->can.clock.freq,
tx_fifo_depth, priv->tx_max);
netdev_dbg(ndev, "reg_base=0x%p irq=%d clock=%d, tx buffers: actual %d, using %d\n",
priv->reg_base, ndev->irq, priv->can.clock.freq,
hw_tx_max, priv->tx_max);
return 0;
......
......@@ -143,7 +143,12 @@ u8 can_dlc2len(u8 can_dlc);
/* map the sanitized data length to an appropriate data length code */
u8 can_len2dlc(u8 len);
struct net_device *alloc_candev(int sizeof_priv, unsigned int echo_skb_max);
struct net_device *alloc_candev_mqs(int sizeof_priv, unsigned int echo_skb_max,
unsigned int txqs, unsigned int rxqs);
#define alloc_candev(sizeof_priv, echo_skb_max) \
alloc_candev_mqs(sizeof_priv, echo_skb_max, 1, 1)
#define alloc_candev_mq(sizeof_priv, echo_skb_max, count) \
alloc_candev_mqs(sizeof_priv, echo_skb_max, count, count)
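/* Usage sketch (hypothetical driver, not part of this patch): a device
 * exposing four independent TX paths could register a multi-queue
 * candev with e.g.
 *
 *   ndev = alloc_candev_mq(sizeof(struct my_priv), 16, 4);
 *
 * while existing single-queue drivers keep using alloc_candev(), which
 * now expands to alloc_candev_mqs(sizeof_priv, echo_skb_max, 1, 1).
 */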
void free_candev(struct net_device *dev);
/* a candev safe wrapper around netdev_priv */
......
......@@ -77,7 +77,7 @@ typedef __u32 canid_t;
/*
* Controller Area Network Error Message Frame Mask structure
*
* bit 0-28 : error class mask (see include/linux/can/error.h)
* bit 0-28 : error class mask (see include/uapi/linux/can/error.h)
* bit 29-31 : set to zero
*/
typedef __u32 can_err_mask_t;
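/* Example (user space, assuming the standard CAN_RAW socket API): a
 * receiver interested only in bus-off and controller problems could set
 *
 *   can_err_mask_t mask = CAN_ERR_BUSOFF | CAN_ERR_CRTL;
 *   setsockopt(s, SOL_CAN_RAW, CAN_RAW_ERR_FILTER, &mask, sizeof(mask));
 */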
......