Commit 8ef988b9 authored by David S. Miller

Merge branch 'NXP-SJA1105-DSA-driver'

Vladimir Oltean says:

====================
NXP SJA1105 DSA driver

This patchset adds a DSA driver for the SPI-controlled NXP SJA1105
switch.  Due to the hardware's unfriendliness, most of its state needs
to be shadowed in kernel memory by the driver. To support this and keep
a decent amount of cleanliness in the code, a new generic API for
converting between CPU-accessible ("unpacked") structures and
hardware-accessible ("packed") structures is proposed and used.

The driver is GPL-2.0 licensed. The source code files which are licensed
as BSD-3-Clause are hardware support files and derivative of the
userspace NXP sja1105-tool program, which is BSD-3-Clause licensed.

TODO items:
* Add support for traffic.
* Add full support for the P/Q/R/S series. The patches were mostly
  tested on a first-generation T device.
* Add timestamping support and PTP clock manipulation.
* Figure out how the tc-taprio hardware offload that was just proposed
  by Vinicius can be used to configure the switch's time-aware scheduler.
* Rework link state callbacks to use phylink once the SGMII port
  is supported.

Changes in v5:
1. Removed trailing empty lines at the end of files.
2. Moved the lib/packing.c file under a CONFIG_PACKING option instead of
   having it always built-in. The module is GPL licensed, which applies
   to its distribution in binary form, but the code is dual-licensed
   which means it can be used in projects with other licenses as well.
3. Made SJA1105 driver select CONFIG_PACKING and CONFIG_CRC32.

v4 patchset can be found at:
https://lwn.net/Articles/787077/

Changes in v4:
1. Previous patchset was broken apart, and for the moment the driver is
   configuring the switch as unmanaged. Support for regular and management
   traffic, as well as for PTP timestamping, will be submitted once the
   basic driver is accepted. Some core DSA patches were also broken out
   of the series, and are a dependency for this series:
   https://patchwork.ozlabs.org/project/netdev/list/?series=105069
2. Addressed Jiri Pirko's feedback about overly generic function and macro
   naming.
3. Re-introduced ETH_P_DSA_8021Q.

v3 patchset can be found at:
https://lkml.org/lkml/2019/4/12/978

Changes in v3:
1. Removed the patch for a dedicated Ethertype to use with 802.1Q DSA
   tagging
2. Changed the SJA1105 switch tagging protocol sysfs label from
   "sja1105" to "8021q" to denote to users such as tcpdump that the
   structure is more generic.
3. Respun previous patch "net: dsa: Allow drivers to modulate between
   presence and absence of tagging". Current equivalent patch is called
   "net: dsa: Allow drivers to filter packets they can decode source
   port from" and at least allows reception of management traffic during
   the time when switch tagging is not enabled.
4. Added DSA-level fixes for the bridge core not unsetting
   vlan_filtering when ports leave. The global VLAN filtering is treated
   as a special case. Made the mt7530 driver use this. This patch
   benefits the SJA1105 because otherwise traffic in standalone mode
   would no longer work after removing the ports from a vlan_filtering
   bridge, since the driver and the hardware would be in an inconsistent
   state.
5. Restructured the documentation as rst. This depends upon the recently
   submitted "[PATCH net-next] Documentation: net: dsa: transition to
   the rst format": https://patchwork.ozlabs.org/patch/1084658/.

v2 patchset can be found at:
https://www.spinics.net/lists/netdev/msg563454.html

Changes in v2:
1. Device ID is no longer auto-detected but enforced based on explicit DT
   compatible string. This helps with stricter checking of DT bindings.
2. Group all device-specific operations into a sja1105_info structure and
   avoid using the IS_ET() and IS_PQRS() macros at runtime as much as possible.
3. Added more verbiage to commit messages and documentation.
4. Treat the case where RGMII internal delays are requested through DT bindings
   by returning an error.
5. Miscellaneous cosmetic cleanup in sja1105_clocking.c
6. Not advertising link features that are not supported, such as pause frames
   and the half duplex modes.
7. Fixed a mistake in previous patchset where the switch tagging was not
   actually enabled (lost during a rebase). This brought up another uncaught
   issue where switching at runtime between tagging and no-tagging was not
   supported by DSA. Fixed up the mistake in "net: dsa: sja1105: Add support
   for traffic through standalone ports", and added the new patch "net: dsa:
   Allow drivers to modulate between presence and absence of tagging" to
   address the other issue.
8. Added a workaround for switch resets cutting a frame in the middle of
   transmission, which would throw off some link partners.
9. Changed the TPID from ETH_P_EDSA (0xDADA) to a newly introduced one:
   ETH_P_DSA_8021Q (0xDADB). Uncovered another mistake in the previous patchset
   with a missing ntohs(), which was not caught because 0xDADA is
   endian-agnostic.
10. Made NET_DSA_TAG_8021Q select VLAN_8021Q
11. Renamed __dsa_port_vlan_add to dsa_port_vid_add and not to
    dsa_port_vlan_add_trans, as suggested, because the corresponding _del function
    does not have a transactional phase and the naming is more uniform this way.

v1 patchset can be found at:
https://www.spinics.net/lists/netdev/msg561589.html

Changes from RFC:
1. Removed the packing code for the static configuration tables that are
   not currently used
2. Removed the code for unpacking a static configuration structure from
   a memory buffer (not used)
3. Completely removed the SGMII stubs, since the configuration is not
   complete anyway.
4. Moved some code from the SJA1105 introduction commit into the patch
   that used it.
5. Made the code for checking global VLAN filtering generic and made b53
   driver use it.
6. Made mt7530 driver use the new generic dp->vlan_filtering
7. Fixed check for stringset in .get_sset_count
8. Minor cleanup in sja1105_clocking.c
9. Fixed a confusing typo in DSA

RFC can be found at:
https://www.mail-archive.com/netdev@vger.kernel.org/msg291717.html
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 8b952747 013fe01d
NXP SJA1105 switch driver
=========================
Required properties:
- compatible:
Must be one of:
- "nxp,sja1105e"
- "nxp,sja1105t"
- "nxp,sja1105p"
- "nxp,sja1105q"
- "nxp,sja1105r"
- "nxp,sja1105s"
Although the device ID could be detected at runtime, explicit bindings
are required in order to be able to statically check their validity.
For example, SGMII can only be specified on port 4 of R and S devices,
and the non-SGMII devices, while pin-compatible, are not equal in terms
of support for RGMII internal delays (supported on P/Q/R/S, but not on
E/T).
Optional properties:
- sja1105,role-mac:
- sja1105,role-phy:
Boolean properties that can be assigned under each port node. By
default (unless otherwise specified) a port is configured as MAC if it
is driving a PHY (phy-handle is present) or as PHY if it is PHY-less
(fixed-link specified, presumably because it is connected to a MAC).
The effect of this property (in either its implicit or explicit form)
is:
- In the case of MII or RMII it specifies whether the SJA1105 port is a
clock source or sink for this interface (not applicable for RGMII
where there is a Tx and an Rx clock).
- In the case of RGMII it affects the behavior regarding internal
delays:
1. If sja1105,role-mac is specified, and the phy-mode property is one
of "rgmii-id", "rgmii-txid" or "rgmii-rxid", then the entity
designated to apply the delay/clock skew necessary for RGMII
is the PHY. The SJA1105 MAC does not apply any internal delays.
2. If sja1105,role-phy is specified, and the phy-mode property is one
of the above, the designated entity to apply the internal delays
is the SJA1105 MAC (if hardware-supported). This is only supported
by the second-generation (P/Q/R/S) hardware. On a first-generation
E or T device, it is an error to specify an RGMII phy-mode other
than "rgmii" for a port that is in fixed-link mode. In that case,
the clock skew must either be added by the MAC at the other end of
the fixed-link, or by PCB serpentine traces on the board.
These properties are required, for example, in the case where SJA1105
ports are at both ends of a MII/RMII PHY-less setup. One end would need
to have sja1105,role-mac, while the other sja1105,role-phy.
See Documentation/devicetree/bindings/net/dsa/dsa.txt for the list of standard
DSA required and optional properties.
Other observations
------------------
The SJA1105 SPI interface requires a CS-to-CLK time (t2 in UM10944) of at least
one half of t_CLK. At an SPI frequency of 1MHz, this means a minimum
cs_sck_delay of 500ns. Ensuring that this SPI timing requirement is observed
depends on the SPI bus master driver.
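The arithmetic behind that requirement can be sketched as follows (the helper name is hypothetical; the real constraint must be guaranteed by the SPI bus master driver):

```c
#include <assert.h>

/* t2 >= t_CLK / 2: minimum CS-to-SCK delay in nanoseconds for a given
 * SCK frequency. Helper name is hypothetical; enforcement of this
 * timing depends on the SPI bus master driver.
 */
static unsigned int min_cs_sck_delay_ns(unsigned int sck_hz)
{
	return 1000000000u / sck_hz / 2;
}
```

At the 4 MHz spi-max-frequency used in the example below, the minimum would be 125 ns, so a programmed delay of 1000 ns leaves ample margin.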
Example
-------
Ethernet switch connected via SPI to the host, CPU port wired to enet2:
arch/arm/boot/dts/ls1021a-tsn.dts:
/* SPI controller of the LS1021 */
&dspi0 {
sja1105@1 {
reg = <0x1>;
#address-cells = <1>;
#size-cells = <0>;
compatible = "nxp,sja1105t";
spi-max-frequency = <4000000>;
fsl,spi-cs-sck-delay = <1000>;
fsl,spi-sck-cs-delay = <1000>;
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
/* ETH5 written on chassis */
label = "swp5";
phy-handle = <&rgmii_phy6>;
phy-mode = "rgmii-id";
reg = <0>;
/* Implicit "sja1105,role-mac;" */
};
port@1 {
/* ETH2 written on chassis */
label = "swp2";
phy-handle = <&rgmii_phy3>;
phy-mode = "rgmii-id";
reg = <1>;
/* Implicit "sja1105,role-mac;" */
};
port@2 {
/* ETH3 written on chassis */
label = "swp3";
phy-handle = <&rgmii_phy4>;
phy-mode = "rgmii-id";
reg = <2>;
/* Implicit "sja1105,role-mac;" */
};
port@3 {
/* ETH4 written on chassis */
phy-handle = <&rgmii_phy5>;
label = "swp4";
phy-mode = "rgmii-id";
reg = <3>;
/* Implicit "sja1105,role-mac;" */
};
port@4 {
/* Internal port connected to eth2 */
ethernet = <&enet2>;
phy-mode = "rgmii";
reg = <4>;
/* Implicit "sja1105,role-phy;" */
fixed-link {
speed = <1000>;
full-duplex;
};
};
};
};
};
/* MDIO controller of the LS1021 */
&mdio0 {
/* BCM5464 */
rgmii_phy3: ethernet-phy@3 {
reg = <0x3>;
};
rgmii_phy4: ethernet-phy@4 {
reg = <0x4>;
};
rgmii_phy5: ethernet-phy@5 {
reg = <0x5>;
};
rgmii_phy6: ethernet-phy@6 {
reg = <0x6>;
};
};
/* Ethernet master port of the LS1021 */
&enet2 {
phy-connection-type = "rgmii";
status = "ok";
fixed-link {
speed = <1000>;
full-duplex;
};
};
@@ -8,3 +8,4 @@ Distributed Switch Architecture
dsa
bcm_sf2
lan9303
sja1105
=========================
NXP SJA1105 switch driver
=========================
Overview
========
The NXP SJA1105 is a family of 6 devices:
- SJA1105E: First generation, no TTEthernet
- SJA1105T: First generation, TTEthernet
- SJA1105P: Second generation, no TTEthernet, no SGMII
- SJA1105Q: Second generation, TTEthernet, no SGMII
- SJA1105R: Second generation, no TTEthernet, SGMII
- SJA1105S: Second generation, TTEthernet, SGMII
These are SPI-managed automotive switches, with all ports being gigabit
capable, and supporting MII/RMII/RGMII and optionally SGMII on one port.
Being automotive parts, their configuration interface is geared towards
set-and-forget use, with minimal dynamic interaction at runtime. They
require a static configuration to be composed by software and packed
with CRC and table headers, and sent over SPI.
The static configuration is composed of several configuration tables. Each
table takes a number of entries. Some configuration tables can be (partially)
reconfigured at runtime, some not. Some tables are mandatory, some not:
============================= ================== =============================
Table Mandatory Reconfigurable
============================= ================== =============================
Schedule no no
Schedule entry points if Scheduling no
VL Lookup no no
VL Policing if VL Lookup no
VL Forwarding if VL Lookup no
L2 Lookup no no
L2 Policing yes no
VLAN Lookup yes yes
L2 Forwarding yes partially (fully on P/Q/R/S)
MAC Config yes partially (fully on P/Q/R/S)
Schedule Params if Scheduling no
Schedule Entry Points Params if Scheduling no
VL Forwarding Params if VL Forwarding no
L2 Lookup Params no partially (fully on P/Q/R/S)
L2 Forwarding Params yes no
Clock Sync Params no no
AVB Params no no
General Params yes partially
Retagging no yes
xMII Params yes no
SGMII no yes
============================= ================== =============================
Also the configuration is write-only (with very few exceptions, software
cannot read it back from the switch).
The driver creates a static configuration at probe time, and keeps it at
all times in memory, as a shadow for the hardware state. When required to
change a hardware setting, the static configuration is also updated.
If that changed setting can be transmitted to the switch through the dynamic
reconfiguration interface, it is; otherwise the switch is reset and
reprogrammed with the updated static configuration.
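The update policy described above can be sketched roughly like this (all names are hypothetical; the real driver updates its shadow tables in place and falls back to a switch reset plus full static config upload):

```c
#include <assert.h>
#include <stdbool.h>

enum update_path { DYNAMIC_RECONFIG, RESET_AND_UPLOAD };

/* Hypothetical shadow entry: the driver's in-memory copy of one
 * static config table entry.
 */
struct shadow_entry {
	unsigned long value;
};

/* Step 1: the in-memory shadow is updated unconditionally.
 * Step 2: propagate via dynamic reconfiguration if the table supports
 * it, otherwise reset the switch and re-upload the static config.
 */
static enum update_path update_setting(struct shadow_entry *shadow,
				       unsigned long value,
				       bool reconfigurable)
{
	shadow->value = value;
	return reconfigurable ? DYNAMIC_RECONFIG : RESET_AND_UPLOAD;
}
```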
Switching features
==================
The driver supports the configuration of L2 forwarding rules in hardware for
port bridging. The forwarding, broadcast and flooding domain between ports can
be restricted through two methods: either at the L2 forwarding level (isolate
one bridge's ports from another's) or at the VLAN port membership level
(isolate ports within the same bridge). The final forwarding decision taken by
the hardware is a logical AND of these two sets of rules.
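As a sketch of that logical AND (mask encoding and names are hypothetical, one bit per port):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A frame ingressing on port 'from' may egress on port 'to' only if
 * both the L2 forwarding rules (per-bridge isolation) and the VLAN
 * port membership (per-VLAN isolation) allow it.
 */
static bool may_forward(const uint8_t *l2_fwd_mask, uint8_t vlan_member_mask,
			int from, int to)
{
	uint8_t allowed = l2_fwd_mask[from] & vlan_member_mask;

	return (allowed & (1u << to)) != 0;
}
```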
The hardware tags all traffic internally with a port-based VLAN (pvid), or it
decodes the VLAN information from the 802.1Q tag. Advanced VLAN classification
is not possible. Once attributed a VLAN tag, frames are checked against the
port's membership rules and dropped at ingress if they don't match any VLAN.
This behavior is available when switch ports are enslaved to a bridge with
``vlan_filtering 1``.
Normally the hardware is not configurable with respect to VLAN awareness, but
by changing what TPID the switch searches 802.1Q tags for, the semantics of a
bridge with ``vlan_filtering 0`` can be kept (accept all traffic, tagged or
untagged), and therefore this mode is also supported.
Segregating the switch ports in multiple bridges is supported (e.g. 2 + 2), but
all bridges should have the same level of VLAN awareness (either both have
``vlan_filtering`` 0, or both 1). Also an inevitable limitation of the fact
that VLAN awareness is global at the switch level is that once a bridge with
``vlan_filtering`` enslaves at least one switch port, the other un-bridged
ports are no longer available for standalone traffic termination.
Device Tree bindings and board design
=====================================
This section references ``Documentation/devicetree/bindings/net/dsa/sja1105.txt``
and aims to showcase some potential switch caveats.
RMII PHY role and out-of-band signaling
---------------------------------------
In the RMII spec, the 50 MHz clock signals are either driven by the MAC or by
an external oscillator (but not by the PHY).
But the spec is rather loose and devices go outside it in several ways.
Some PHYs go against the spec and may provide an output pin where they source
the 50 MHz clock themselves, in an attempt to be helpful.
On the other hand, the SJA1105 is only binary configurable - when in the RMII
MAC role it will also attempt to drive the clock signal. To prevent this from
happening it must be put in RMII PHY role.
But doing so has some unintended consequences.
In the RMII spec, the PHY can transmit extra out-of-band signals via RXD[1:0].
These are practically some extra code words (/J/ and /K/) sent prior to the
preamble of each frame. The MAC does not have this out-of-band signaling
mechanism defined by the RMII spec.
So when the SJA1105 port is put in PHY role to avoid having 2 drivers on the
clock signal, inevitably an RMII PHY-to-PHY connection is created. The SJA1105
emulates a PHY interface fully and generates the /J/ and /K/ symbols prior to
frame preambles, which the real PHY is not expected to understand. So the PHY
simply encodes the extra symbols received from the SJA1105-as-PHY onto the
100Base-Tx wire.
On the other side of the wire, some link partners might discard these extra
symbols, while others might choke on them and discard the entire Ethernet
frames that follow along. This looks like packet loss with some link partners
but not with others.
The take-away is that in RMII mode, the SJA1105 must be allowed to drive the
reference clock if connected to a PHY.
RGMII fixed-link and internal delays
------------------------------------
As mentioned in the bindings document, the second generation of devices has
tunable delay lines as part of the MAC, which can be used to establish the
correct RGMII timing budget.
When powered up, these can shift the Rx and Tx clocks with a phase difference
between 73.8 and 101.7 degrees.
The catch is that the delay lines need to lock onto a clock signal with a
stable frequency. This means that there must be at least 2 microseconds of
silence between the clock at the old vs at the new frequency. Otherwise the
lock is lost and the delay lines must be reset (powered down and back up).
In RGMII the clock frequency changes with link speed (125 MHz at 1000 Mbps, 25
MHz at 100 Mbps and 2.5 MHz at 10 Mbps), and link speed might change during the
AN process.
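The speed-to-frequency mapping from the paragraph above, as a trivial lookup (the helper name is hypothetical):

```c
#include <assert.h>

/* RGMII reference clock in kHz for a given link speed, with the values
 * quoted above (125 MHz / 25 MHz / 2.5 MHz).
 */
static unsigned int rgmii_clk_khz(unsigned int speed_mbps)
{
	switch (speed_mbps) {
	case 1000:
		return 125000;
	case 100:
		return 25000;
	case 10:
		return 2500;
	default:
		return 0; /* not a valid RGMII link speed */
	}
}
```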
In the situation where the switch port is connected through an RGMII fixed-link
to a link partner whose link state life cycle is outside the control of Linux
(such as a different SoC), then the delay lines would remain unlocked (and
inactive) until there is manual intervention (ifdown/ifup on the switch port).
The take-away is that in RGMII mode, the switch's internal delays are only
reliable if the link partner never changes link speeds, or if it does, it does
so in a way that is coordinated with the switch port (practically, both ends of
the fixed-link are under control of the same Linux system).
As to why a fixed-link interface would ever change link speeds: there are
Ethernet controllers out there which come out of reset in 100 Mbps mode, and
their driver inevitably needs to change the speed and clock frequency if it's
required to work at gigabit.
MDIO bus and PHY management
---------------------------
The SJA1105 does not have an MDIO bus and does not perform in-band AN either.
Therefore there is no link state notification coming from the switch device.
A board would need to hook up the PHYs connected to the switch to any other
MDIO bus available to Linux within the system (e.g. to the DSA master's MDIO
bus). Link state management then works by the driver manually keeping in sync
(over SPI commands) the MAC link speed with the settings negotiated by the PHY.
================================================
Generic bitfield packing and unpacking functions
================================================
Problem statement
-----------------
When working with hardware, one has to choose between several approaches to
interfacing with it.
One can memory-map a pointer to a carefully crafted struct over the hardware
device's memory region, and access its fields as struct members (potentially
declared as bitfields). But writing code this way would make it less portable,
due to potential endianness mismatches between the CPU and the hardware device.
Additionally, one has to pay close attention when translating register
definitions from the hardware documentation into bit field indices for the
structs. Also, some hardware (typically networking equipment) tends to group
its register fields in ways that violate any reasonable word boundaries
(sometimes even 64 bit ones). This creates the inconvenience of having to
define "high" and "low" portions of register fields within the struct.
A more robust alternative to struct field definitions would be to extract the
required fields by shifting the appropriate number of bits. But this would
still not protect from endianness mismatches, except if all memory accesses
were performed byte-by-byte. Also the code can easily get cluttered, and the
high-level idea might get lost among the many bit shifts required.
Many drivers take the bit-shifting approach and then attempt to reduce the
clutter with tailored macros, but more often than not these macros take
shortcuts that still prevent the code from being truly portable.
The solution
------------
This API deals with 2 basic operations:
- Packing a CPU-usable number into a memory buffer (with hardware
constraints/quirks)
- Unpacking a memory buffer (which has hardware constraints/quirks)
into a CPU-usable number.
The API offers an abstraction over said hardware constraints and quirks,
as well as over CPU endianness, and therefore over possible mismatches
between the two.
The basic unit of these API functions is the u64. From the CPU's
perspective, bit 63 always means bit offset 7 of byte 7, albeit only
logically. The question is: where do we lay this bit out in memory?
The following examples cover the memory layout of a packed u64 field.
The byte offsets in the packed buffer are always implicitly 0, 1, ... 7.
What the examples show is where the logical bytes and bits sit.
1. Normally (no quirks), we would do it like this:
63 62 61 60 59 58 57 56 55 54 53 52 51 50 49 48 47 46 45 44 43 42 41 40 39 38 37 36 35 34 33 32
7 6 5 4
31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
3 2 1 0
That is, the MSByte (7) of the CPU-usable u64 sits at memory offset 0, and the
LSByte (0) of the u64 sits at memory offset 7.
This corresponds to what most folks would regard as "big endian", where
bit i corresponds to the number 2^i. This is also referred to in the code
comments as "logical" notation.
2. If QUIRK_MSB_ON_THE_RIGHT is set, we do it like this:
56 57 58 59 60 61 62 63 48 49 50 51 52 53 54 55 40 41 42 43 44 45 46 47 32 33 34 35 36 37 38 39
7 6 5 4
24 25 26 27 28 29 30 31 16 17 18 19 20 21 22 23 8 9 10 11 12 13 14 15 0 1 2 3 4 5 6 7
3 2 1 0
That is, QUIRK_MSB_ON_THE_RIGHT does not affect byte positioning, but
inverts bit offsets inside a byte.
3. If QUIRK_LITTLE_ENDIAN is set, we do it like this:
39 38 37 36 35 34 33 32 47 46 45 44 43 42 41 40 55 54 53 52 51 50 49 48 63 62 61 60 59 58 57 56
4 5 6 7
7 6 5 4 3 2 1 0 15 14 13 12 11 10 9 8 23 22 21 20 19 18 17 16 31 30 29 28 27 26 25 24
0 1 2 3
Therefore, QUIRK_LITTLE_ENDIAN means that inside the memory region, every
byte from each 4-byte word is placed at its mirrored position compared to
the boundary of that word.
4. If QUIRK_MSB_ON_THE_RIGHT and QUIRK_LITTLE_ENDIAN are both set, we do it
like this:
32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
4 5 6 7
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
0 1 2 3
5. If just QUIRK_LSW32_IS_FIRST is set, we do it like this:
31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
3 2 1 0
63 62 61 60 59 58 57 56 55 54 53 52 51 50 49 48 47 46 45 44 43 42 41 40 39 38 37 36 35 34 33 32
7 6 5 4
In this case the 8 byte memory region is interpreted as follows: first
4 bytes correspond to the least significant 4-byte word, next 4 bytes to
the more significant 4-byte word.
6. If QUIRK_LSW32_IS_FIRST and QUIRK_MSB_ON_THE_RIGHT are set, we do it like
this:
24 25 26 27 28 29 30 31 16 17 18 19 20 21 22 23 8 9 10 11 12 13 14 15 0 1 2 3 4 5 6 7
3 2 1 0
56 57 58 59 60 61 62 63 48 49 50 51 52 53 54 55 40 41 42 43 44 45 46 47 32 33 34 35 36 37 38 39
7 6 5 4
7. If QUIRK_LSW32_IS_FIRST and QUIRK_LITTLE_ENDIAN are set, it looks like
this:
7 6 5 4 3 2 1 0 15 14 13 12 11 10 9 8 23 22 21 20 19 18 17 16 31 30 29 28 27 26 25 24
0 1 2 3
39 38 37 36 35 34 33 32 47 46 45 44 43 42 41 40 55 54 53 52 51 50 49 48 63 62 61 60 59 58 57 56
4 5 6 7
8. If QUIRK_LSW32_IS_FIRST, QUIRK_LITTLE_ENDIAN and QUIRK_MSB_ON_THE_RIGHT
are set, it looks like this:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
0 1 2 3
32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
4 5 6 7
We always think of our offsets as if there were no quirk, and we translate
them afterwards, before accessing the memory region.
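For the two byte-level quirks, that translation can be sketched like this (QUIRK_MSB_ON_THE_RIGHT only reverses bit order inside a byte, so it moves no bytes; the flag values and helper name are assumptions, and the buffer length is assumed to be a multiple of 4):

```c
#include <assert.h>
#include <stddef.h>

#define QUIRK_LITTLE_ENDIAN	(1 << 1)
#define QUIRK_LSW32_IS_FIRST	(1 << 2)

/* Translate a logical byte offset (0 = LSByte of the u64) into the
 * physical offset within a packed buffer of 'len' bytes, matching the
 * layout examples above.
 */
static size_t physical_offset(size_t logical, size_t len, unsigned int quirks)
{
	size_t off = len - 1 - logical;		/* no quirk: MSByte at offset 0 */

	if (quirks & QUIRK_LITTLE_ENDIAN)	/* mirror within each 4-byte word */
		off = (off / 4) * 4 + (3 - off % 4);
	if (quirks & QUIRK_LSW32_IS_FIRST)	/* reverse the 4-byte word order */
		off = (len / 4 - 1 - off / 4) * 4 + off % 4;
	return off;
}
```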
Intended use
------------
Drivers that opt to use this API first need to identify which combination of
the 3 quirks described above (8 possibilities in total) matches what the
hardware documentation describes. Then they should wrap the packing()
function, creating a new xxx_packing() that calls it with the proper QUIRK_*
one-hot bits set.
The packing() function returns an int-encoded error code, which protects the
programmer against incorrect API use. The errors are not expected to occur
during runtime, therefore it is reasonable for xxx_packing() to return void
and simply swallow those errors. Optionally it can dump the stack or print the
error description.
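A minimal sketch of this pattern, assuming the no-quirk (big endian) layout; my_pack() stands in for the real packing() and all names are hypothetical:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Pack bits [endbit..startbit] of uval into an 8-byte buffer laid out
 * MSByte-first (example 1 above). Returns an error code on incorrect
 * use, in the spirit of the real packing().
 */
static int my_pack(uint8_t *pbuf, uint64_t uval, int startbit, int endbit)
{
	int width = startbit - endbit + 1;
	int i;

	if (startbit < endbit || startbit > 63 || endbit < 0)
		return -1;		/* invalid bit field boundaries */
	if (width < 64 && (uval >> width))
		return -2;		/* value does not fit the field */

	for (i = endbit; i <= startbit; i++) {
		int bit = (uval >> (i - endbit)) & 1;
		int byte = 7 - i / 8;	/* logical byte i/8 -> physical */

		pbuf[byte] &= ~(1u << (i % 8));
		pbuf[byte] |= bit << (i % 8);
	}
	return 0;
}

/* Driver-style wrapper: with compile-time constant bit offsets the
 * errors cannot occur at runtime, so swallow them and return void.
 */
static void xxx_pack(uint8_t *pbuf, uint64_t uval, int startbit, int endbit)
{
	if (my_pack(pbuf, uval, startbit, endbit))
		fprintf(stderr, "invalid field %d-%d\n", startbit, endbit);
}
```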
@@ -11120,6 +11120,12 @@ S: Maintained
F: Documentation/devicetree/bindings/sound/sgtl5000.txt
F: sound/soc/codecs/sgtl5000*
NXP SJA1105 ETHERNET SWITCH DRIVER
M: Vladimir Oltean <olteanv@gmail.com>
L: linux-kernel@vger.kernel.org
S: Maintained
F: drivers/net/dsa/sja1105
NXP TDA998X DRM DRIVER
M: Russell King <linux@armlinux.org.uk>
S: Maintained
@@ -11673,6 +11679,14 @@ L: linux-i2c@vger.kernel.org
S: Orphan
F: drivers/i2c/busses/i2c-pasemi.c
PACKING
M: Vladimir Oltean <olteanv@gmail.com>
L: netdev@vger.kernel.org
S: Supported
F: lib/packing.c
F: include/linux/packing.h
F: Documentation/packing.txt
PADATA PARALLEL EXECUTION MECHANISM
M: Steffen Klassert <steffen.klassert@secunet.com>
L: linux-crypto@vger.kernel.org
@@ -51,6 +51,8 @@ source "drivers/net/dsa/microchip/Kconfig"
source "drivers/net/dsa/mv88e6xxx/Kconfig"
source "drivers/net/dsa/sja1105/Kconfig"
config NET_DSA_QCA8K
tristate "Qualcomm Atheros QCA8K Ethernet switch family support"
depends on NET_DSA
@@ -18,3 +18,4 @@ obj-$(CONFIG_NET_DSA_VITESSE_VSC73XX) += vitesse-vsc73xx.o
obj-y += b53/
obj-y += microchip/
obj-y += mv88e6xxx/
obj-y += sja1105/
config NET_DSA_SJA1105
tristate "NXP SJA1105 Ethernet switch family support"
depends on NET_DSA && SPI
select PACKING
select CRC32
help
This is the driver for the NXP SJA1105 automotive Ethernet switch
family. These are 5-port devices and are managed over an SPI
interface. Probing is handled based on OF bindings and so is the
linkage to phylib. The driver supports the following revisions:
- SJA1105E (Gen. 1, No TT-Ethernet)
- SJA1105T (Gen. 1, TT-Ethernet)
- SJA1105P (Gen. 2, No SGMII, No TT-Ethernet)
- SJA1105Q (Gen. 2, No SGMII, TT-Ethernet)
- SJA1105R (Gen. 2, SGMII, No TT-Ethernet)
- SJA1105S (Gen. 2, SGMII, TT-Ethernet)
obj-$(CONFIG_NET_DSA_SJA1105) += sja1105.o
sja1105-objs := \
sja1105_spi.o \
sja1105_main.o \
sja1105_ethtool.o \
sja1105_clocking.o \
sja1105_static_config.o \
sja1105_dynamic_config.o \
/* SPDX-License-Identifier: GPL-2.0
* Copyright (c) 2018, Sensor-Technik Wiedemann GmbH
* Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
*/
#ifndef _SJA1105_H
#define _SJA1105_H
#include <linux/dsa/sja1105.h>
#include <net/dsa.h>
#include "sja1105_static_config.h"
#define SJA1105_NUM_PORTS 5
#define SJA1105_NUM_TC 8
#define SJA1105ET_FDB_BIN_SIZE 4
/* The hardware value is in multiples of 10 ms.
* The passed parameter is in multiples of 1 ms.
*/
#define SJA1105_AGEING_TIME_MS(ms) ((ms) / 10)
/* Keeps the different addresses between E/T and P/Q/R/S */
struct sja1105_regs {
u64 device_id;
u64 prod_id;
u64 status;
u64 port_control;
u64 rgu;
u64 config;
u64 rmii_pll1;
u64 pad_mii_tx[SJA1105_NUM_PORTS];
u64 cgu_idiv[SJA1105_NUM_PORTS];
u64 rgmii_pad_mii_tx[SJA1105_NUM_PORTS];
u64 mii_tx_clk[SJA1105_NUM_PORTS];
u64 mii_rx_clk[SJA1105_NUM_PORTS];
u64 mii_ext_tx_clk[SJA1105_NUM_PORTS];
u64 mii_ext_rx_clk[SJA1105_NUM_PORTS];
u64 rgmii_tx_clk[SJA1105_NUM_PORTS];
u64 rmii_ref_clk[SJA1105_NUM_PORTS];
u64 rmii_ext_tx_clk[SJA1105_NUM_PORTS];
u64 mac[SJA1105_NUM_PORTS];
u64 mac_hl1[SJA1105_NUM_PORTS];
u64 mac_hl2[SJA1105_NUM_PORTS];
u64 qlevel[SJA1105_NUM_PORTS];
};
struct sja1105_info {
u64 device_id;
/* Needed for distinction between P and R, and between Q and S
* (since the parts with/without SGMII share the same
* switch core and device_id)
*/
u64 part_no;
const struct sja1105_dynamic_table_ops *dyn_ops;
const struct sja1105_table_ops *static_ops;
const struct sja1105_regs *regs;
int (*reset_cmd)(const void *ctx, const void *data);
int (*setup_rgmii_delay)(const void *ctx, int port);
const char *name;
};
struct sja1105_private {
struct sja1105_static_config static_config;
bool rgmii_rx_delay[SJA1105_NUM_PORTS];
bool rgmii_tx_delay[SJA1105_NUM_PORTS];
const struct sja1105_info *info;
struct gpio_desc *reset_gpio;
struct spi_device *spidev;
struct dsa_switch *ds;
};
#include "sja1105_dynamic_config.h"
struct sja1105_spi_message {
u64 access;
u64 read_count;
u64 address;
};
typedef enum {
SPI_READ = 0,
SPI_WRITE = 1,
} sja1105_spi_rw_mode_t;
/* From sja1105_spi.c */
int sja1105_spi_send_packed_buf(const struct sja1105_private *priv,
sja1105_spi_rw_mode_t rw, u64 reg_addr,
void *packed_buf, size_t size_bytes);
int sja1105_spi_send_int(const struct sja1105_private *priv,
sja1105_spi_rw_mode_t rw, u64 reg_addr,
u64 *value, u64 size_bytes);
int sja1105_spi_send_long_packed_buf(const struct sja1105_private *priv,
sja1105_spi_rw_mode_t rw, u64 base_addr,
void *packed_buf, u64 buf_len);
int sja1105_static_config_upload(struct sja1105_private *priv);
extern struct sja1105_info sja1105e_info;
extern struct sja1105_info sja1105t_info;
extern struct sja1105_info sja1105p_info;
extern struct sja1105_info sja1105q_info;
extern struct sja1105_info sja1105r_info;
extern struct sja1105_info sja1105s_info;
/* From sja1105_clocking.c */
typedef enum {
XMII_MAC = 0,
XMII_PHY = 1,
} sja1105_mii_role_t;
typedef enum {
XMII_MODE_MII = 0,
XMII_MODE_RMII = 1,
XMII_MODE_RGMII = 2,
} sja1105_phy_interface_t;
typedef enum {
SJA1105_SPEED_10MBPS = 3,
SJA1105_SPEED_100MBPS = 2,
SJA1105_SPEED_1000MBPS = 1,
SJA1105_SPEED_AUTO = 0,
} sja1105_speed_t;
int sja1105_clocking_setup_port(struct sja1105_private *priv, int port);
int sja1105_clocking_setup(struct sja1105_private *priv);
/* From sja1105_ethtool.c */
void sja1105_get_ethtool_stats(struct dsa_switch *ds, int port, u64 *data);
void sja1105_get_strings(struct dsa_switch *ds, int port,
u32 stringset, u8 *data);
int sja1105_get_sset_count(struct dsa_switch *ds, int port, int sset);
/* From sja1105_dynamic_config.c */
int sja1105_dynamic_config_read(struct sja1105_private *priv,
enum sja1105_blk_idx blk_idx,
int index, void *entry);
int sja1105_dynamic_config_write(struct sja1105_private *priv,
enum sja1105_blk_idx blk_idx,
int index, void *entry, bool keep);
u8 sja1105_fdb_hash(struct sja1105_private *priv, const u8 *addr, u16 vid);
/* Common implementations for the static and dynamic configs */
size_t sja1105_l2_forwarding_entry_packing(void *buf, void *entry_ptr,
enum packing_op op);
size_t sja1105pqrs_l2_lookup_entry_packing(void *buf, void *entry_ptr,
enum packing_op op);
size_t sja1105et_l2_lookup_entry_packing(void *buf, void *entry_ptr,
enum packing_op op);
size_t sja1105_vlan_lookup_entry_packing(void *buf, void *entry_ptr,
enum packing_op op);
size_t sja1105pqrs_mac_config_entry_packing(void *buf, void *entry_ptr,
enum packing_op op);
#endif
// SPDX-License-Identifier: BSD-3-Clause
/* Copyright (c) 2016-2018, NXP Semiconductors
 * Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
 */
#include <linux/packing.h>
#include "sja1105.h"

#define SJA1105_SIZE_CGU_CMD 4

struct sja1105_cfg_pad_mii_tx {
	u64 d32_os;
	u64 d32_ipud;
	u64 d10_os;
	u64 d10_ipud;
	u64 ctrl_os;
	u64 ctrl_ipud;
	u64 clk_os;
	u64 clk_ih;
	u64 clk_ipud;
};

/* UM10944 Table 82.
 * IDIV_0_C to IDIV_4_C control registers
 * (addr. 10000Bh to 10000Fh)
 */
struct sja1105_cgu_idiv {
	u64 clksrc;
	u64 autoblock;
	u64 idiv;
	u64 pd;
};

/* PLL_1_C control register
 *
 * SJA1105 E/T: UM10944 Table 81 (address 10000Ah)
 * SJA1105 P/Q/R/S: UM11040 Table 116 (address 10000Ah)
 */
struct sja1105_cgu_pll_ctrl {
	u64 pllclksrc;
	u64 msel;
	u64 autoblock;
	u64 psel;
	u64 direct;
	u64 fbsel;
	u64 bypass;
	u64 pd;
};

enum {
	CLKSRC_MII0_TX_CLK = 0x00,
	CLKSRC_MII0_RX_CLK = 0x01,
	CLKSRC_MII1_TX_CLK = 0x02,
	CLKSRC_MII1_RX_CLK = 0x03,
	CLKSRC_MII2_TX_CLK = 0x04,
	CLKSRC_MII2_RX_CLK = 0x05,
	CLKSRC_MII3_TX_CLK = 0x06,
	CLKSRC_MII3_RX_CLK = 0x07,
	CLKSRC_MII4_TX_CLK = 0x08,
	CLKSRC_MII4_RX_CLK = 0x09,
	CLKSRC_PLL0 = 0x0B,
	CLKSRC_PLL1 = 0x0E,
	CLKSRC_IDIV0 = 0x11,
	CLKSRC_IDIV1 = 0x12,
	CLKSRC_IDIV2 = 0x13,
	CLKSRC_IDIV3 = 0x14,
	CLKSRC_IDIV4 = 0x15,
};

/* UM10944 Table 83.
 * MIIx clock control registers 1 to 30
 * (addresses 100013h to 100035h)
 */
struct sja1105_cgu_mii_ctrl {
	u64 clksrc;
	u64 autoblock;
	u64 pd;
};
static void sja1105_cgu_idiv_packing(void *buf, struct sja1105_cgu_idiv *idiv,
				     enum packing_op op)
{
	const int size = 4;

	sja1105_packing(buf, &idiv->clksrc, 28, 24, size, op);
	sja1105_packing(buf, &idiv->autoblock, 11, 11, size, op);
	sja1105_packing(buf, &idiv->idiv, 5, 2, size, op);
	sja1105_packing(buf, &idiv->pd, 0, 0, size, op);
}

static int sja1105_cgu_idiv_config(struct sja1105_private *priv, int port,
				   bool enabled, int factor)
{
	const struct sja1105_regs *regs = priv->info->regs;
	struct device *dev = priv->ds->dev;
	struct sja1105_cgu_idiv idiv;
	u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0};

	if (enabled && factor != 1 && factor != 10) {
		dev_err(dev, "idiv factor must be 1 or 10\n");
		return -ERANGE;
	}

	/* Payload for packed_buf */
	idiv.clksrc = 0x0A;		/* 25MHz */
	idiv.autoblock = 1;		/* Block clk automatically */
	idiv.idiv = factor - 1;		/* Divide by 1 or 10 */
	idiv.pd = enabled ? 0 : 1;	/* Power down? */
	sja1105_cgu_idiv_packing(packed_buf, &idiv, PACK);

	return sja1105_spi_send_packed_buf(priv, SPI_WRITE,
					   regs->cgu_idiv[port], packed_buf,
					   SJA1105_SIZE_CGU_CMD);
}
static void
sja1105_cgu_mii_control_packing(void *buf, struct sja1105_cgu_mii_ctrl *cmd,
				enum packing_op op)
{
	const int size = 4;

	sja1105_packing(buf, &cmd->clksrc, 28, 24, size, op);
	sja1105_packing(buf, &cmd->autoblock, 11, 11, size, op);
	sja1105_packing(buf, &cmd->pd, 0, 0, size, op);
}

static int sja1105_cgu_mii_tx_clk_config(struct sja1105_private *priv,
					 int port, sja1105_mii_role_t role)
{
	const struct sja1105_regs *regs = priv->info->regs;
	struct sja1105_cgu_mii_ctrl mii_tx_clk;
	const int mac_clk_sources[] = {
		CLKSRC_MII0_TX_CLK,
		CLKSRC_MII1_TX_CLK,
		CLKSRC_MII2_TX_CLK,
		CLKSRC_MII3_TX_CLK,
		CLKSRC_MII4_TX_CLK,
	};
	const int phy_clk_sources[] = {
		CLKSRC_IDIV0,
		CLKSRC_IDIV1,
		CLKSRC_IDIV2,
		CLKSRC_IDIV3,
		CLKSRC_IDIV4,
	};
	u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0};
	int clksrc;

	if (role == XMII_MAC)
		clksrc = mac_clk_sources[port];
	else
		clksrc = phy_clk_sources[port];

	/* Payload for packed_buf */
	mii_tx_clk.clksrc = clksrc;
	mii_tx_clk.autoblock = 1;	/* Autoblock clk while changing clksrc */
	mii_tx_clk.pd = 0;		/* Power Down off => enabled */
	sja1105_cgu_mii_control_packing(packed_buf, &mii_tx_clk, PACK);

	return sja1105_spi_send_packed_buf(priv, SPI_WRITE,
					   regs->mii_tx_clk[port], packed_buf,
					   SJA1105_SIZE_CGU_CMD);
}
static int
sja1105_cgu_mii_rx_clk_config(struct sja1105_private *priv, int port)
{
	const struct sja1105_regs *regs = priv->info->regs;
	struct sja1105_cgu_mii_ctrl mii_rx_clk;
	u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0};
	const int clk_sources[] = {
		CLKSRC_MII0_RX_CLK,
		CLKSRC_MII1_RX_CLK,
		CLKSRC_MII2_RX_CLK,
		CLKSRC_MII3_RX_CLK,
		CLKSRC_MII4_RX_CLK,
	};

	/* Payload for packed_buf */
	mii_rx_clk.clksrc = clk_sources[port];
	mii_rx_clk.autoblock = 1;	/* Autoblock clk while changing clksrc */
	mii_rx_clk.pd = 0;		/* Power Down off => enabled */
	sja1105_cgu_mii_control_packing(packed_buf, &mii_rx_clk, PACK);

	return sja1105_spi_send_packed_buf(priv, SPI_WRITE,
					   regs->mii_rx_clk[port],
					   packed_buf, SJA1105_SIZE_CGU_CMD);
}

static int
sja1105_cgu_mii_ext_tx_clk_config(struct sja1105_private *priv, int port)
{
	const struct sja1105_regs *regs = priv->info->regs;
	struct sja1105_cgu_mii_ctrl mii_ext_tx_clk;
	u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0};
	const int clk_sources[] = {
		CLKSRC_IDIV0,
		CLKSRC_IDIV1,
		CLKSRC_IDIV2,
		CLKSRC_IDIV3,
		CLKSRC_IDIV4,
	};

	/* Payload for packed_buf */
	mii_ext_tx_clk.clksrc = clk_sources[port];
	mii_ext_tx_clk.autoblock = 1;	/* Autoblock clk while changing clksrc */
	mii_ext_tx_clk.pd = 0;		/* Power Down off => enabled */
	sja1105_cgu_mii_control_packing(packed_buf, &mii_ext_tx_clk, PACK);

	return sja1105_spi_send_packed_buf(priv, SPI_WRITE,
					   regs->mii_ext_tx_clk[port],
					   packed_buf, SJA1105_SIZE_CGU_CMD);
}

static int
sja1105_cgu_mii_ext_rx_clk_config(struct sja1105_private *priv, int port)
{
	const struct sja1105_regs *regs = priv->info->regs;
	struct sja1105_cgu_mii_ctrl mii_ext_rx_clk;
	u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0};
	const int clk_sources[] = {
		CLKSRC_IDIV0,
		CLKSRC_IDIV1,
		CLKSRC_IDIV2,
		CLKSRC_IDIV3,
		CLKSRC_IDIV4,
	};

	/* Payload for packed_buf */
	mii_ext_rx_clk.clksrc = clk_sources[port];
	mii_ext_rx_clk.autoblock = 1;	/* Autoblock clk while changing clksrc */
	mii_ext_rx_clk.pd = 0;		/* Power Down off => enabled */
	sja1105_cgu_mii_control_packing(packed_buf, &mii_ext_rx_clk, PACK);

	return sja1105_spi_send_packed_buf(priv, SPI_WRITE,
					   regs->mii_ext_rx_clk[port],
					   packed_buf, SJA1105_SIZE_CGU_CMD);
}
static int sja1105_mii_clocking_setup(struct sja1105_private *priv, int port,
				      sja1105_mii_role_t role)
{
	struct device *dev = priv->ds->dev;
	int rc;

	dev_dbg(dev, "Configuring MII-%s clocking\n",
		(role == XMII_MAC) ? "MAC" : "PHY");
	/* If role is MAC, disable IDIV
	 * If role is PHY, enable IDIV and configure for 1/1 divider
	 */
	rc = sja1105_cgu_idiv_config(priv, port, (role == XMII_PHY), 1);
	if (rc < 0)
		return rc;

	/* Configure CLKSRC of MII_TX_CLK_n
	 *   * If role is MAC, select TX_CLK_n
	 *   * If role is PHY, select IDIV_n
	 */
	rc = sja1105_cgu_mii_tx_clk_config(priv, port, role);
	if (rc < 0)
		return rc;

	/* Configure CLKSRC of MII_RX_CLK_n
	 * Select RX_CLK_n
	 */
	rc = sja1105_cgu_mii_rx_clk_config(priv, port);
	if (rc < 0)
		return rc;

	if (role == XMII_PHY) {
		/* Per MII spec, the PHY (which is us) drives the TX_CLK pin */

		/* Configure CLKSRC of EXT_TX_CLK_n
		 * Select IDIV_n
		 */
		rc = sja1105_cgu_mii_ext_tx_clk_config(priv, port);
		if (rc < 0)
			return rc;

		/* Configure CLKSRC of EXT_RX_CLK_n
		 * Select IDIV_n
		 */
		rc = sja1105_cgu_mii_ext_rx_clk_config(priv, port);
		if (rc < 0)
			return rc;
	}
	return 0;
}
static void
sja1105_cgu_pll_control_packing(void *buf, struct sja1105_cgu_pll_ctrl *cmd,
				enum packing_op op)
{
	const int size = 4;

	sja1105_packing(buf, &cmd->pllclksrc, 28, 24, size, op);
	sja1105_packing(buf, &cmd->msel, 23, 16, size, op);
	sja1105_packing(buf, &cmd->autoblock, 11, 11, size, op);
	sja1105_packing(buf, &cmd->psel, 9, 8, size, op);
	sja1105_packing(buf, &cmd->direct, 7, 7, size, op);
	sja1105_packing(buf, &cmd->fbsel, 6, 6, size, op);
	sja1105_packing(buf, &cmd->bypass, 1, 1, size, op);
	sja1105_packing(buf, &cmd->pd, 0, 0, size, op);
}

static int sja1105_cgu_rgmii_tx_clk_config(struct sja1105_private *priv,
					   int port, sja1105_speed_t speed)
{
	const struct sja1105_regs *regs = priv->info->regs;
	struct sja1105_cgu_mii_ctrl txc;
	u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0};
	int clksrc;

	if (speed == SJA1105_SPEED_1000MBPS) {
		clksrc = CLKSRC_PLL0;
	} else {
		int clk_sources[] = {CLKSRC_IDIV0, CLKSRC_IDIV1, CLKSRC_IDIV2,
				     CLKSRC_IDIV3, CLKSRC_IDIV4};
		clksrc = clk_sources[port];
	}

	/* RGMII: 125MHz for 1000, 25MHz for 100, 2.5MHz for 10 */
	txc.clksrc = clksrc;
	/* Autoblock clk while changing clksrc */
	txc.autoblock = 1;
	/* Power Down off => enabled */
	txc.pd = 0;
	sja1105_cgu_mii_control_packing(packed_buf, &txc, PACK);

	return sja1105_spi_send_packed_buf(priv, SPI_WRITE,
					   regs->rgmii_tx_clk[port],
					   packed_buf, SJA1105_SIZE_CGU_CMD);
}
/* AGU */
static void
sja1105_cfg_pad_mii_tx_packing(void *buf, struct sja1105_cfg_pad_mii_tx *cmd,
			       enum packing_op op)
{
	const int size = 4;

	sja1105_packing(buf, &cmd->d32_os, 28, 27, size, op);
	sja1105_packing(buf, &cmd->d32_ipud, 25, 24, size, op);
	sja1105_packing(buf, &cmd->d10_os, 20, 19, size, op);
	sja1105_packing(buf, &cmd->d10_ipud, 17, 16, size, op);
	sja1105_packing(buf, &cmd->ctrl_os, 12, 11, size, op);
	sja1105_packing(buf, &cmd->ctrl_ipud, 9, 8, size, op);
	sja1105_packing(buf, &cmd->clk_os, 4, 3, size, op);
	sja1105_packing(buf, &cmd->clk_ih, 2, 2, size, op);
	sja1105_packing(buf, &cmd->clk_ipud, 1, 0, size, op);
}

static int sja1105_rgmii_cfg_pad_tx_config(struct sja1105_private *priv,
					   int port)
{
	const struct sja1105_regs *regs = priv->info->regs;
	struct sja1105_cfg_pad_mii_tx pad_mii_tx;
	u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0};

	/* Payload */
	pad_mii_tx.d32_os = 3;		/* TXD[3:2] output stage: */
					/* high noise/high speed */
	pad_mii_tx.d10_os = 3;		/* TXD[1:0] output stage: */
					/* high noise/high speed */
	pad_mii_tx.d32_ipud = 2;	/* TXD[3:2] input stage: */
					/* plain input (default) */
	pad_mii_tx.d10_ipud = 2;	/* TXD[1:0] input stage: */
					/* plain input (default) */
	pad_mii_tx.ctrl_os = 3;		/* TX_CTL / TX_ER output stage */
	pad_mii_tx.ctrl_ipud = 2;	/* TX_CTL / TX_ER input stage (default) */
	pad_mii_tx.clk_os = 3;		/* TX_CLK output stage */
	pad_mii_tx.clk_ih = 0;		/* TX_CLK input hysteresis (default) */
	pad_mii_tx.clk_ipud = 2;	/* TX_CLK input stage (default) */
	sja1105_cfg_pad_mii_tx_packing(packed_buf, &pad_mii_tx, PACK);

	return sja1105_spi_send_packed_buf(priv, SPI_WRITE,
					   regs->rgmii_pad_mii_tx[port],
					   packed_buf, SJA1105_SIZE_CGU_CMD);
}
static int sja1105_rgmii_clocking_setup(struct sja1105_private *priv, int port)
{
	struct device *dev = priv->ds->dev;
	struct sja1105_mac_config_entry *mac;
	sja1105_speed_t speed;
	int rc;

	mac = priv->static_config.tables[BLK_IDX_MAC_CONFIG].entries;
	speed = mac[port].speed;

	dev_dbg(dev, "Configuring port %d RGMII at speed %dMbps\n",
		port, speed);

	switch (speed) {
	case SJA1105_SPEED_1000MBPS:
		/* 1000Mbps, IDIV disabled (125 MHz) */
		rc = sja1105_cgu_idiv_config(priv, port, false, 1);
		break;
	case SJA1105_SPEED_100MBPS:
		/* 100Mbps, IDIV enabled, divide by 1 (25 MHz) */
		rc = sja1105_cgu_idiv_config(priv, port, true, 1);
		break;
	case SJA1105_SPEED_10MBPS:
		/* 10Mbps, IDIV enabled, divide by 10 (2.5 MHz) */
		rc = sja1105_cgu_idiv_config(priv, port, true, 10);
		break;
	case SJA1105_SPEED_AUTO:
		/* Skip CGU configuration if there is no speed available
		 * (e.g. link is not established yet)
		 */
		dev_dbg(dev, "Speed not available, skipping CGU config\n");
		return 0;
	default:
		rc = -EINVAL;
	}

	if (rc < 0) {
		dev_err(dev, "Failed to configure idiv\n");
		return rc;
	}
	rc = sja1105_cgu_rgmii_tx_clk_config(priv, port, speed);
	if (rc < 0) {
		dev_err(dev, "Failed to configure RGMII Tx clock\n");
		return rc;
	}
	rc = sja1105_rgmii_cfg_pad_tx_config(priv, port);
	if (rc < 0) {
		dev_err(dev, "Failed to configure Tx pad registers\n");
		return rc;
	}
	if (!priv->info->setup_rgmii_delay)
		return 0;

	return priv->info->setup_rgmii_delay(priv, port);
}
static int sja1105_cgu_rmii_ref_clk_config(struct sja1105_private *priv,
					   int port)
{
	const struct sja1105_regs *regs = priv->info->regs;
	struct sja1105_cgu_mii_ctrl ref_clk;
	u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0};
	const int clk_sources[] = {
		CLKSRC_MII0_TX_CLK,
		CLKSRC_MII1_TX_CLK,
		CLKSRC_MII2_TX_CLK,
		CLKSRC_MII3_TX_CLK,
		CLKSRC_MII4_TX_CLK,
	};

	/* Payload for packed_buf */
	ref_clk.clksrc = clk_sources[port];
	ref_clk.autoblock = 1;	/* Autoblock clk while changing clksrc */
	ref_clk.pd = 0;		/* Power Down off => enabled */
	sja1105_cgu_mii_control_packing(packed_buf, &ref_clk, PACK);

	return sja1105_spi_send_packed_buf(priv, SPI_WRITE,
					   regs->rmii_ref_clk[port],
					   packed_buf, SJA1105_SIZE_CGU_CMD);
}

static int
sja1105_cgu_rmii_ext_tx_clk_config(struct sja1105_private *priv, int port)
{
	const struct sja1105_regs *regs = priv->info->regs;
	struct sja1105_cgu_mii_ctrl ext_tx_clk;
	u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0};

	/* Payload for packed_buf */
	ext_tx_clk.clksrc = CLKSRC_PLL1;
	ext_tx_clk.autoblock = 1;	/* Autoblock clk while changing clksrc */
	ext_tx_clk.pd = 0;		/* Power Down off => enabled */
	sja1105_cgu_mii_control_packing(packed_buf, &ext_tx_clk, PACK);

	return sja1105_spi_send_packed_buf(priv, SPI_WRITE,
					   regs->rmii_ext_tx_clk[port],
					   packed_buf, SJA1105_SIZE_CGU_CMD);
}
static int sja1105_cgu_rmii_pll_config(struct sja1105_private *priv)
{
	const struct sja1105_regs *regs = priv->info->regs;
	u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0};
	struct sja1105_cgu_pll_ctrl pll = {0};
	struct device *dev = priv->ds->dev;
	int rc;

	/* PLL1 must be enabled and output 50 MHz.
	 * This is done by first writing 0x0A010941 to
	 * the PLL_1_C register and then deasserting
	 * power down (PD): 0x0A010940.
	 */

	/* Step 1: PLL1 setup for 50 MHz */
	pll.pllclksrc = 0xA;
	pll.msel = 0x1;
	pll.autoblock = 0x1;
	pll.psel = 0x1;
	pll.direct = 0x0;
	pll.fbsel = 0x1;
	pll.bypass = 0x0;
	pll.pd = 0x1;

	sja1105_cgu_pll_control_packing(packed_buf, &pll, PACK);
	rc = sja1105_spi_send_packed_buf(priv, SPI_WRITE, regs->rmii_pll1,
					 packed_buf, SJA1105_SIZE_CGU_CMD);
	if (rc < 0) {
		dev_err(dev, "failed to configure PLL1 for 50MHz\n");
		return rc;
	}

	/* Step 2: Enable PLL1 */
	pll.pd = 0x0;

	sja1105_cgu_pll_control_packing(packed_buf, &pll, PACK);
	rc = sja1105_spi_send_packed_buf(priv, SPI_WRITE, regs->rmii_pll1,
					 packed_buf, SJA1105_SIZE_CGU_CMD);
	if (rc < 0) {
		dev_err(dev, "failed to enable PLL1\n");
		return rc;
	}

	return 0;
}
static int sja1105_rmii_clocking_setup(struct sja1105_private *priv, int port,
				       sja1105_mii_role_t role)
{
	struct device *dev = priv->ds->dev;
	int rc;

	dev_dbg(dev, "Configuring RMII-%s clocking\n",
		(role == XMII_MAC) ? "MAC" : "PHY");
	/* AH1601.pdf chapter 2.5.1. Sources */
	if (role == XMII_MAC) {
		/* Configure and enable PLL1 for 50 MHz output */
		rc = sja1105_cgu_rmii_pll_config(priv);
		if (rc < 0)
			return rc;
	}
	/* Disable IDIV for this port */
	rc = sja1105_cgu_idiv_config(priv, port, false, 1);
	if (rc < 0)
		return rc;

	/* Source to sink mappings */
	rc = sja1105_cgu_rmii_ref_clk_config(priv, port);
	if (rc < 0)
		return rc;

	if (role == XMII_MAC) {
		rc = sja1105_cgu_rmii_ext_tx_clk_config(priv, port);
		if (rc < 0)
			return rc;
	}
	return 0;
}
int sja1105_clocking_setup_port(struct sja1105_private *priv, int port)
{
	struct sja1105_xmii_params_entry *mii;
	struct device *dev = priv->ds->dev;
	sja1105_phy_interface_t phy_mode;
	sja1105_mii_role_t role;
	int rc;

	mii = priv->static_config.tables[BLK_IDX_XMII_PARAMS].entries;

	/* RGMII etc */
	phy_mode = mii->xmii_mode[port];
	/* MAC or PHY, for applicable types (not RGMII) */
	role = mii->phy_mac[port];

	switch (phy_mode) {
	case XMII_MODE_MII:
		rc = sja1105_mii_clocking_setup(priv, port, role);
		break;
	case XMII_MODE_RMII:
		rc = sja1105_rmii_clocking_setup(priv, port, role);
		break;
	case XMII_MODE_RGMII:
		rc = sja1105_rgmii_clocking_setup(priv, port);
		break;
	default:
		dev_err(dev, "Invalid interface mode specified: %d\n",
			phy_mode);
		return -EINVAL;
	}
	if (rc)
		dev_err(dev, "Clocking setup for port %d failed: %d\n",
			port, rc);
	return rc;
}

int sja1105_clocking_setup(struct sja1105_private *priv)
{
	int port, rc;

	for (port = 0; port < SJA1105_NUM_PORTS; port++) {
		rc = sja1105_clocking_setup_port(priv, port);
		if (rc < 0)
			return rc;
	}
	return 0;
}
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
 */
#include "sja1105.h"

#define SJA1105_SIZE_DYN_CMD 4

#define SJA1105ET_SIZE_MAC_CONFIG_DYN_ENTRY \
	SJA1105_SIZE_DYN_CMD

#define SJA1105ET_SIZE_L2_LOOKUP_DYN_CMD \
	(SJA1105_SIZE_DYN_CMD + SJA1105ET_SIZE_L2_LOOKUP_ENTRY)

#define SJA1105PQRS_SIZE_L2_LOOKUP_DYN_CMD \
	(SJA1105_SIZE_DYN_CMD + SJA1105PQRS_SIZE_L2_LOOKUP_ENTRY)

#define SJA1105_SIZE_VLAN_LOOKUP_DYN_CMD \
	(SJA1105_SIZE_DYN_CMD + 4 + SJA1105_SIZE_VLAN_LOOKUP_ENTRY)

#define SJA1105_SIZE_L2_FORWARDING_DYN_CMD \
	(SJA1105_SIZE_DYN_CMD + SJA1105_SIZE_L2_FORWARDING_ENTRY)

#define SJA1105ET_SIZE_MAC_CONFIG_DYN_CMD \
	(SJA1105_SIZE_DYN_CMD + SJA1105ET_SIZE_MAC_CONFIG_DYN_ENTRY)

#define SJA1105PQRS_SIZE_MAC_CONFIG_DYN_CMD \
	(SJA1105_SIZE_DYN_CMD + SJA1105PQRS_SIZE_MAC_CONFIG_ENTRY)

#define SJA1105ET_SIZE_L2_LOOKUP_PARAMS_DYN_CMD \
	SJA1105_SIZE_DYN_CMD

#define SJA1105ET_SIZE_GENERAL_PARAMS_DYN_CMD \
	SJA1105_SIZE_DYN_CMD

#define SJA1105_MAX_DYN_CMD_SIZE \
	SJA1105PQRS_SIZE_MAC_CONFIG_DYN_CMD
static void
sja1105pqrs_l2_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
				  enum packing_op op)
{
	u8 *p = buf + SJA1105PQRS_SIZE_L2_LOOKUP_ENTRY;
	const int size = SJA1105_SIZE_DYN_CMD;

	sja1105_packing(p, &cmd->valid, 31, 31, size, op);
	sja1105_packing(p, &cmd->rdwrset, 30, 30, size, op);
	sja1105_packing(p, &cmd->errors, 29, 29, size, op);
	sja1105_packing(p, &cmd->valident, 27, 27, size, op);
	/* Hack - The hardware takes the 'index' field within
	 * struct sja1105_l2_lookup_entry as the index on which this command
	 * will operate. However it will ignore everything else, so 'index'
	 * is logically part of the command, but physically part of the entry.
	 * Populate the 'index' entry field from within the command callback,
	 * such that our API doesn't need to ask for a full-blown entry
	 * structure when e.g. a delete is requested.
	 */
	sja1105_packing(buf, &cmd->index, 29, 20,
			SJA1105PQRS_SIZE_L2_LOOKUP_ENTRY, op);
	/* TODO hostcmd */
}
static void
sja1105et_l2_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
				enum packing_op op)
{
	u8 *p = buf + SJA1105ET_SIZE_L2_LOOKUP_ENTRY;
	const int size = SJA1105_SIZE_DYN_CMD;

	sja1105_packing(p, &cmd->valid, 31, 31, size, op);
	sja1105_packing(p, &cmd->rdwrset, 30, 30, size, op);
	sja1105_packing(p, &cmd->errors, 29, 29, size, op);
	sja1105_packing(p, &cmd->valident, 27, 27, size, op);
	/* Hack - see comments above. */
	sja1105_packing(buf, &cmd->index, 29, 20,
			SJA1105ET_SIZE_L2_LOOKUP_ENTRY, op);
}

static void
sja1105et_mgmt_route_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
				 enum packing_op op)
{
	u8 *p = buf + SJA1105ET_SIZE_L2_LOOKUP_ENTRY;
	u64 mgmtroute = 1;

	sja1105et_l2_lookup_cmd_packing(buf, cmd, op);
	if (op == PACK)
		sja1105_pack(p, &mgmtroute, 26, 26, SJA1105_SIZE_DYN_CMD);
}
static size_t sja1105et_mgmt_route_entry_packing(void *buf, void *entry_ptr,
						 enum packing_op op)
{
	struct sja1105_mgmt_entry *entry = entry_ptr;
	const size_t size = SJA1105ET_SIZE_L2_LOOKUP_ENTRY;

	/* UM10944: To specify if a PTP egress timestamp shall be captured on
	 * each port upon transmission of the frame, the LSB of VLANID in the
	 * ENTRY field provided by the host must be set.
	 * Bit 1 of VLANID then specifies the register in which the timestamp
	 * for this port is stored.
	 */
	sja1105_packing(buf, &entry->tsreg, 85, 85, size, op);
	sja1105_packing(buf, &entry->takets, 84, 84, size, op);
	sja1105_packing(buf, &entry->macaddr, 83, 36, size, op);
	sja1105_packing(buf, &entry->destports, 35, 31, size, op);
	sja1105_packing(buf, &entry->enfport, 30, 30, size, op);
	return size;
}

/* In E/T, the entry is at addresses 0x27-0x28. There is a 4 byte gap at 0x29,
 * and the command is at 0x2a. Similarly in P/Q/R/S there is a 1 register gap
 * between entry (0x2d, 0x2e) and command (0x30).
 */
static void
sja1105_vlan_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
				enum packing_op op)
{
	u8 *p = buf + SJA1105_SIZE_VLAN_LOOKUP_ENTRY + 4;
	const int size = SJA1105_SIZE_DYN_CMD;

	sja1105_packing(p, &cmd->valid, 31, 31, size, op);
	sja1105_packing(p, &cmd->rdwrset, 30, 30, size, op);
	sja1105_packing(p, &cmd->valident, 27, 27, size, op);
	/* Hack - see comments above, applied for 'vlanid' field of
	 * struct sja1105_vlan_lookup_entry.
	 */
	sja1105_packing(buf, &cmd->index, 38, 27,
			SJA1105_SIZE_VLAN_LOOKUP_ENTRY, op);
}

static void
sja1105_l2_forwarding_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
				  enum packing_op op)
{
	u8 *p = buf + SJA1105_SIZE_L2_FORWARDING_ENTRY;
	const int size = SJA1105_SIZE_DYN_CMD;

	sja1105_packing(p, &cmd->valid, 31, 31, size, op);
	sja1105_packing(p, &cmd->errors, 30, 30, size, op);
	sja1105_packing(p, &cmd->rdwrset, 29, 29, size, op);
	sja1105_packing(p, &cmd->index, 4, 0, size, op);
}
static void
sja1105et_mac_config_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
				 enum packing_op op)
{
	const int size = SJA1105_SIZE_DYN_CMD;
	/* Yup, user manual definitions are reversed */
	u8 *reg1 = buf + 4;

	sja1105_packing(reg1, &cmd->valid, 31, 31, size, op);
	sja1105_packing(reg1, &cmd->index, 26, 24, size, op);
}

static size_t sja1105et_mac_config_entry_packing(void *buf, void *entry_ptr,
						 enum packing_op op)
{
	const int size = SJA1105ET_SIZE_MAC_CONFIG_DYN_ENTRY;
	struct sja1105_mac_config_entry *entry = entry_ptr;
	/* Yup, user manual definitions are reversed */
	u8 *reg1 = buf + 4;
	u8 *reg2 = buf;

	sja1105_packing(reg1, &entry->speed, 30, 29, size, op);
	sja1105_packing(reg1, &entry->drpdtag, 23, 23, size, op);
	sja1105_packing(reg1, &entry->drpuntag, 22, 22, size, op);
	sja1105_packing(reg1, &entry->retag, 21, 21, size, op);
	sja1105_packing(reg1, &entry->dyn_learn, 20, 20, size, op);
	sja1105_packing(reg1, &entry->egress, 19, 19, size, op);
	sja1105_packing(reg1, &entry->ingress, 18, 18, size, op);
	sja1105_packing(reg1, &entry->ing_mirr, 17, 17, size, op);
	sja1105_packing(reg1, &entry->egr_mirr, 16, 16, size, op);
	sja1105_packing(reg1, &entry->vlanprio, 14, 12, size, op);
	sja1105_packing(reg1, &entry->vlanid, 11, 0, size, op);
	sja1105_packing(reg2, &entry->tp_delin, 31, 16, size, op);
	sja1105_packing(reg2, &entry->tp_delout, 15, 0, size, op);
	/* MAC configuration table entries which can't be reconfigured:
	 * top, base, enabled, ifg, maxage, drpnona664
	 */
	/* Bogus return value, not used anywhere */
	return 0;
}
static void
sja1105pqrs_mac_config_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
				   enum packing_op op)
{
	const int size = SJA1105_SIZE_DYN_CMD;
	u8 *p = buf + SJA1105PQRS_SIZE_MAC_CONFIG_ENTRY;

	sja1105_packing(p, &cmd->valid, 31, 31, size, op);
	sja1105_packing(p, &cmd->errors, 30, 30, size, op);
	sja1105_packing(p, &cmd->rdwrset, 29, 29, size, op);
	sja1105_packing(p, &cmd->index, 2, 0, size, op);
}
static void
sja1105et_l2_lookup_params_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
				       enum packing_op op)
{
	sja1105_packing(buf, &cmd->valid, 31, 31,
			SJA1105ET_SIZE_L2_LOOKUP_PARAMS_DYN_CMD, op);
}

static size_t
sja1105et_l2_lookup_params_entry_packing(void *buf, void *entry_ptr,
					 enum packing_op op)
{
	struct sja1105_l2_lookup_params_entry *entry = entry_ptr;

	sja1105_packing(buf, &entry->poly, 7, 0,
			SJA1105ET_SIZE_L2_LOOKUP_PARAMS_DYN_CMD, op);
	/* Bogus return value, not used anywhere */
	return 0;
}

static void
sja1105et_general_params_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
				     enum packing_op op)
{
	const int size = SJA1105ET_SIZE_GENERAL_PARAMS_DYN_CMD;

	sja1105_packing(buf, &cmd->valid, 31, 31, size, op);
	sja1105_packing(buf, &cmd->errors, 30, 30, size, op);
}

static size_t
sja1105et_general_params_entry_packing(void *buf, void *entry_ptr,
				       enum packing_op op)
{
	struct sja1105_general_params_entry *entry = entry_ptr;
	const int size = SJA1105ET_SIZE_GENERAL_PARAMS_DYN_CMD;

	sja1105_packing(buf, &entry->mirr_port, 2, 0, size, op);
	/* Bogus return value, not used anywhere */
	return 0;
}

#define OP_READ		BIT(0)
#define OP_WRITE	BIT(1)
#define OP_DEL		BIT(2)
/* SJA1105E/T: First generation */
struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = {
	[BLK_IDX_L2_LOOKUP] = {
		.entry_packing = sja1105et_l2_lookup_entry_packing,
		.cmd_packing = sja1105et_l2_lookup_cmd_packing,
		.access = (OP_READ | OP_WRITE | OP_DEL),
		.max_entry_count = SJA1105_MAX_L2_LOOKUP_COUNT,
		.packed_size = SJA1105ET_SIZE_L2_LOOKUP_DYN_CMD,
		.addr = 0x20,
	},
	[BLK_IDX_MGMT_ROUTE] = {
		.entry_packing = sja1105et_mgmt_route_entry_packing,
		.cmd_packing = sja1105et_mgmt_route_cmd_packing,
		.access = (OP_READ | OP_WRITE),
		.max_entry_count = SJA1105_NUM_PORTS,
		.packed_size = SJA1105ET_SIZE_L2_LOOKUP_DYN_CMD,
		.addr = 0x20,
	},
	[BLK_IDX_L2_POLICING] = {0},
	[BLK_IDX_VLAN_LOOKUP] = {
		.entry_packing = sja1105_vlan_lookup_entry_packing,
		.cmd_packing = sja1105_vlan_lookup_cmd_packing,
		.access = (OP_WRITE | OP_DEL),
		.max_entry_count = SJA1105_MAX_VLAN_LOOKUP_COUNT,
		.packed_size = SJA1105_SIZE_VLAN_LOOKUP_DYN_CMD,
		.addr = 0x27,
	},
	[BLK_IDX_L2_FORWARDING] = {
		.entry_packing = sja1105_l2_forwarding_entry_packing,
		.cmd_packing = sja1105_l2_forwarding_cmd_packing,
		.max_entry_count = SJA1105_MAX_L2_FORWARDING_COUNT,
		.access = OP_WRITE,
		.packed_size = SJA1105_SIZE_L2_FORWARDING_DYN_CMD,
		.addr = 0x24,
	},
	[BLK_IDX_MAC_CONFIG] = {
		.entry_packing = sja1105et_mac_config_entry_packing,
		.cmd_packing = sja1105et_mac_config_cmd_packing,
		.max_entry_count = SJA1105_MAX_MAC_CONFIG_COUNT,
		.access = OP_WRITE,
		.packed_size = SJA1105ET_SIZE_MAC_CONFIG_DYN_CMD,
		.addr = 0x36,
	},
	[BLK_IDX_L2_LOOKUP_PARAMS] = {
		.entry_packing = sja1105et_l2_lookup_params_entry_packing,
		.cmd_packing = sja1105et_l2_lookup_params_cmd_packing,
		.max_entry_count = SJA1105_MAX_L2_LOOKUP_PARAMS_COUNT,
		.access = OP_WRITE,
		.packed_size = SJA1105ET_SIZE_L2_LOOKUP_PARAMS_DYN_CMD,
		.addr = 0x38,
	},
	[BLK_IDX_L2_FORWARDING_PARAMS] = {0},
	[BLK_IDX_GENERAL_PARAMS] = {
		.entry_packing = sja1105et_general_params_entry_packing,
		.cmd_packing = sja1105et_general_params_cmd_packing,
		.max_entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT,
		.access = OP_WRITE,
		.packed_size = SJA1105ET_SIZE_GENERAL_PARAMS_DYN_CMD,
		.addr = 0x34,
	},
	[BLK_IDX_XMII_PARAMS] = {0},
};
/* SJA1105P/Q/R/S: Second generation: TODO */
struct sja1105_dynamic_table_ops sja1105pqrs_dyn_ops[BLK_IDX_MAX_DYN] = {
	[BLK_IDX_L2_LOOKUP] = {
		.entry_packing = sja1105pqrs_l2_lookup_entry_packing,
		.cmd_packing = sja1105pqrs_l2_lookup_cmd_packing,
		.access = (OP_READ | OP_WRITE | OP_DEL),
		.max_entry_count = SJA1105_MAX_L2_LOOKUP_COUNT,
		.packed_size = SJA1105PQRS_SIZE_L2_LOOKUP_DYN_CMD,
		.addr = 0x24,
	},
	[BLK_IDX_L2_POLICING] = {0},
	[BLK_IDX_VLAN_LOOKUP] = {
		.entry_packing = sja1105_vlan_lookup_entry_packing,
		.cmd_packing = sja1105_vlan_lookup_cmd_packing,
		.access = (OP_READ | OP_WRITE | OP_DEL),
		.max_entry_count = SJA1105_MAX_VLAN_LOOKUP_COUNT,
		.packed_size = SJA1105_SIZE_VLAN_LOOKUP_DYN_CMD,
		.addr = 0x2D,
	},
	[BLK_IDX_L2_FORWARDING] = {
		.entry_packing = sja1105_l2_forwarding_entry_packing,
		.cmd_packing = sja1105_l2_forwarding_cmd_packing,
		.max_entry_count = SJA1105_MAX_L2_FORWARDING_COUNT,
		.access = OP_WRITE,
		.packed_size = SJA1105_SIZE_L2_FORWARDING_DYN_CMD,
		.addr = 0x2A,
	},
	[BLK_IDX_MAC_CONFIG] = {
		.entry_packing = sja1105pqrs_mac_config_entry_packing,
		.cmd_packing = sja1105pqrs_mac_config_cmd_packing,
		.max_entry_count = SJA1105_MAX_MAC_CONFIG_COUNT,
		.access = (OP_READ | OP_WRITE),
		.packed_size = SJA1105PQRS_SIZE_MAC_CONFIG_DYN_CMD,
		.addr = 0x4B,
	},
	[BLK_IDX_L2_LOOKUP_PARAMS] = {
		.entry_packing = sja1105et_l2_lookup_params_entry_packing,
		.cmd_packing = sja1105et_l2_lookup_params_cmd_packing,
		.max_entry_count = SJA1105_MAX_L2_LOOKUP_PARAMS_COUNT,
		.access = (OP_READ | OP_WRITE),
		.packed_size = SJA1105ET_SIZE_L2_LOOKUP_PARAMS_DYN_CMD,
		.addr = 0x38,
	},
	[BLK_IDX_L2_FORWARDING_PARAMS] = {0},
	[BLK_IDX_GENERAL_PARAMS] = {
		.entry_packing = sja1105et_general_params_entry_packing,
		.cmd_packing = sja1105et_general_params_cmd_packing,
		.max_entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT,
		.access = OP_WRITE,
		.packed_size = SJA1105ET_SIZE_GENERAL_PARAMS_DYN_CMD,
		.addr = 0x34,
	},
	[BLK_IDX_XMII_PARAMS] = {0},
};
int sja1105_dynamic_config_read(struct sja1105_private *priv,
				enum sja1105_blk_idx blk_idx,
				int index, void *entry)
{
	const struct sja1105_dynamic_table_ops *ops;
	struct sja1105_dyn_cmd cmd = {0};
	/* SPI payload buffer */
	u8 packed_buf[SJA1105_MAX_DYN_CMD_SIZE] = {0};
	int retries = 3;
	int rc;

	if (blk_idx >= BLK_IDX_MAX_DYN)
		return -ERANGE;

	ops = &priv->info->dyn_ops[blk_idx];

	if (index >= ops->max_entry_count)
		return -ERANGE;
	if (!(ops->access & OP_READ))
		return -EOPNOTSUPP;
	if (ops->packed_size > SJA1105_MAX_DYN_CMD_SIZE)
		return -ERANGE;
	if (!ops->cmd_packing)
		return -EOPNOTSUPP;
	if (!ops->entry_packing)
		return -EOPNOTSUPP;

	cmd.valid = true; /* Trigger action on table entry */
	cmd.rdwrset = SPI_READ; /* Action is read */
	cmd.index = index;
	ops->cmd_packing(packed_buf, &cmd, PACK);

	/* Send SPI write operation: read config table entry */
	rc = sja1105_spi_send_packed_buf(priv, SPI_WRITE, ops->addr,
					 packed_buf, ops->packed_size);
	if (rc < 0)
		return rc;

	/* Loop until we have confirmation that hardware has finished
	 * processing the command and has cleared the VALID field
	 */
	do {
		memset(packed_buf, 0, ops->packed_size);

		/* Retrieve the read operation's result */
		rc = sja1105_spi_send_packed_buf(priv, SPI_READ, ops->addr,
						 packed_buf, ops->packed_size);
		if (rc < 0)
			return rc;

		cmd = (struct sja1105_dyn_cmd) {0};
		ops->cmd_packing(packed_buf, &cmd, UNPACK);
		/* UM10944: [valident] will always be found cleared
		 * during a read access with MGMTROUTE set.
		 * So don't error out in that case.
		 */
		if (!cmd.valident && blk_idx != BLK_IDX_MGMT_ROUTE)
			return -EINVAL;
		cpu_relax();
	} while (cmd.valid && --retries);

	if (cmd.valid)
		return -ETIMEDOUT;

	/* Don't dereference possibly NULL pointer - maybe caller
	 * only wanted to see whether the entry existed or not.
	 */
	if (entry)
		ops->entry_packing(packed_buf, entry, UNPACK);
	return 0;
}
int sja1105_dynamic_config_write(struct sja1105_private *priv,
				 enum sja1105_blk_idx blk_idx,
				 int index, void *entry, bool keep)
{
	const struct sja1105_dynamic_table_ops *ops;
	struct sja1105_dyn_cmd cmd = {0};
	/* SPI payload buffer */
	u8 packed_buf[SJA1105_MAX_DYN_CMD_SIZE] = {0};
	int rc;

	if (blk_idx >= BLK_IDX_MAX_DYN)
		return -ERANGE;

	ops = &priv->info->dyn_ops[blk_idx];

	if (index >= ops->max_entry_count)
		return -ERANGE;
	if (!(ops->access & OP_WRITE))
		return -EOPNOTSUPP;
	if (!keep && !(ops->access & OP_DEL))
		return -EOPNOTSUPP;
	if (ops->packed_size > SJA1105_MAX_DYN_CMD_SIZE)
		return -ERANGE;

	cmd.valident = keep; /* If false, deletes entry */
	cmd.valid = true; /* Trigger action on table entry */
	cmd.rdwrset = SPI_WRITE; /* Action is write */
	cmd.index = index;

	if (!ops->cmd_packing)
		return -EOPNOTSUPP;
	ops->cmd_packing(packed_buf, &cmd, PACK);

	if (!ops->entry_packing)
		return -EOPNOTSUPP;
	/* Don't dereference potentially NULL pointer if just
	 * deleting a table entry is what was requested. For cases
	 * where 'index' field is physically part of entry structure,
	 * and needed here, we deal with that in the cmd_packing callback.
	 */
	if (keep)
		ops->entry_packing(packed_buf, entry, PACK);

	/* Send SPI write operation: write config table entry */
	rc = sja1105_spi_send_packed_buf(priv, SPI_WRITE, ops->addr,
					 packed_buf, ops->packed_size);
	if (rc < 0)
		return rc;

	cmd = (struct sja1105_dyn_cmd) {0};
	ops->cmd_packing(packed_buf, &cmd, UNPACK);
	if (cmd.errors)
		return -EINVAL;

	return 0;
}
static u8 sja1105_crc8_add(u8 crc, u8 byte, u8 poly)
{
	int i;

	for (i = 0; i < 8; i++) {
		if ((crc ^ byte) & (1 << 7)) {
			crc <<= 1;
			crc ^= poly;
		} else {
			crc <<= 1;
		}
		byte <<= 1;
	}
	return crc;
}
/* CRC8 algorithm with non-reversed input, non-reversed output,
 * no input xor and no output xor. Code customized for receiving
 * the SJA1105 E/T FDB keys (vlanid, macaddr) as input. The CRC
 * polynomial is also received as argument, in the Koopman notation
 * in which the switch hardware stores it.
 */
u8 sja1105_fdb_hash(struct sja1105_private *priv, const u8 *addr, u16 vid)
{
	struct sja1105_l2_lookup_params_entry *l2_lookup_params =
		priv->static_config.tables[BLK_IDX_L2_LOOKUP_PARAMS].entries;
	u64 poly_koopman = l2_lookup_params->poly;
	/* Convert polynomial from Koopman to 'normal' notation */
	u8 poly = (u8)(1 + (poly_koopman << 1));
	u64 vlanid = l2_lookup_params->shared_learn ? 0 : vid;
	u64 input = (vlanid << 48) | ether_addr_to_u64(addr);
	u8 crc = 0; /* seed */
	int i;

	/* Mask the eight bytes starting from MSB one at a time */
	for (i = 56; i >= 0; i -= 8) {
		u8 byte = (input & (0xffull << i)) >> i;

		crc = sja1105_crc8_add(crc, byte, poly);
	}
	return crc;
}
/* SPDX-License-Identifier: GPL-2.0
 * Copyright (c) 2019, Vladimir Oltean <olteanv@gmail.com>
 */
#ifndef _SJA1105_DYNAMIC_CONFIG_H
#define _SJA1105_DYNAMIC_CONFIG_H

#include "sja1105.h"
#include <linux/packing.h>

struct sja1105_dyn_cmd {
	u64 valid;
	u64 rdwrset;
	u64 errors;
	u64 valident;
	u64 index;
};

struct sja1105_dynamic_table_ops {
	/* This returns size_t just to keep same prototype as the
	 * static config ops, of which we are reusing some functions.
	 */
	size_t (*entry_packing)(void *buf, void *entry_ptr, enum packing_op op);
	void (*cmd_packing)(void *buf, struct sja1105_dyn_cmd *cmd,
			    enum packing_op op);
	size_t max_entry_count;
	size_t packed_size;
	u64 addr;
	u8 access;
};

struct sja1105_mgmt_entry {
	u64 tsreg;
	u64 takets;
	u64 macaddr;
	u64 destports;
	u64 enfport;
	u64 index;
};

extern struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN];
extern struct sja1105_dynamic_table_ops sja1105pqrs_dyn_ops[BLK_IDX_MAX_DYN];

#endif
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
*/
#include "sja1105.h"
#define SJA1105_SIZE_MAC_AREA (0x02 * 4)
#define SJA1105_SIZE_HL1_AREA (0x10 * 4)
#define SJA1105_SIZE_HL2_AREA (0x4 * 4)
#define SJA1105_SIZE_QLEVEL_AREA (0x8 * 4) /* 0x4 to 0xB */
struct sja1105_port_status_mac {
u64 n_runt;
u64 n_soferr;
u64 n_alignerr;
u64 n_miierr;
u64 typeerr;
u64 sizeerr;
u64 tctimeout;
u64 priorerr;
u64 nomaster;
u64 memov;
u64 memerr;
u64 invtyp;
u64 intcyov;
u64 domerr;
u64 pcfbagdrop;
u64 spcprior;
u64 ageprior;
u64 portdrop;
u64 lendrop;
u64 bagdrop;
u64 policeerr;
u64 drpnona664err;
u64 spcerr;
u64 agedrp;
};
struct sja1105_port_status_hl1 {
u64 n_n664err;
u64 n_vlanerr;
u64 n_unreleased;
u64 n_sizeerr;
u64 n_crcerr;
u64 n_vlnotfound;
u64 n_ctpolerr;
u64 n_polerr;
u64 n_rxfrmsh;
u64 n_rxfrm;
u64 n_rxbytesh;
u64 n_rxbyte;
u64 n_txfrmsh;
u64 n_txfrm;
u64 n_txbytesh;
u64 n_txbyte;
};
struct sja1105_port_status_hl2 {
u64 n_qfull;
u64 n_part_drop;
u64 n_egr_disabled;
u64 n_not_reach;
u64 qlevel_hwm[8]; /* Only for P/Q/R/S */
u64 qlevel[8]; /* Only for P/Q/R/S */
};
struct sja1105_port_status {
struct sja1105_port_status_mac mac;
struct sja1105_port_status_hl1 hl1;
struct sja1105_port_status_hl2 hl2;
};
static void
sja1105_port_status_mac_unpack(void *buf,
struct sja1105_port_status_mac *status)
{
/* Make pointer arithmetic work on 4 bytes */
u32 *p = buf;
sja1105_unpack(p + 0x0, &status->n_runt, 31, 24, 4);
sja1105_unpack(p + 0x0, &status->n_soferr, 23, 16, 4);
sja1105_unpack(p + 0x0, &status->n_alignerr, 15, 8, 4);
sja1105_unpack(p + 0x0, &status->n_miierr, 7, 0, 4);
sja1105_unpack(p + 0x1, &status->typeerr, 27, 27, 4);
sja1105_unpack(p + 0x1, &status->sizeerr, 26, 26, 4);
sja1105_unpack(p + 0x1, &status->tctimeout, 25, 25, 4);
sja1105_unpack(p + 0x1, &status->priorerr, 24, 24, 4);
sja1105_unpack(p + 0x1, &status->nomaster, 23, 23, 4);
sja1105_unpack(p + 0x1, &status->memov, 22, 22, 4);
sja1105_unpack(p + 0x1, &status->memerr, 21, 21, 4);
sja1105_unpack(p + 0x1, &status->invtyp, 19, 19, 4);
sja1105_unpack(p + 0x1, &status->intcyov, 18, 18, 4);
sja1105_unpack(p + 0x1, &status->domerr, 17, 17, 4);
sja1105_unpack(p + 0x1, &status->pcfbagdrop, 16, 16, 4);
sja1105_unpack(p + 0x1, &status->spcprior, 15, 12, 4);
sja1105_unpack(p + 0x1, &status->ageprior, 11, 8, 4);
sja1105_unpack(p + 0x1, &status->portdrop, 6, 6, 4);
sja1105_unpack(p + 0x1, &status->lendrop, 5, 5, 4);
sja1105_unpack(p + 0x1, &status->bagdrop, 4, 4, 4);
sja1105_unpack(p + 0x1, &status->policeerr, 3, 3, 4);
sja1105_unpack(p + 0x1, &status->drpnona664err, 2, 2, 4);
sja1105_unpack(p + 0x1, &status->spcerr, 1, 1, 4);
sja1105_unpack(p + 0x1, &status->agedrp, 0, 0, 4);
}
static void
sja1105_port_status_hl1_unpack(void *buf,
struct sja1105_port_status_hl1 *status)
{
/* Make pointer arithmetic work on 4 bytes */
u32 *p = buf;
sja1105_unpack(p + 0xF, &status->n_n664err, 31, 0, 4);
sja1105_unpack(p + 0xE, &status->n_vlanerr, 31, 0, 4);
sja1105_unpack(p + 0xD, &status->n_unreleased, 31, 0, 4);
sja1105_unpack(p + 0xC, &status->n_sizeerr, 31, 0, 4);
sja1105_unpack(p + 0xB, &status->n_crcerr, 31, 0, 4);
sja1105_unpack(p + 0xA, &status->n_vlnotfound, 31, 0, 4);
sja1105_unpack(p + 0x9, &status->n_ctpolerr, 31, 0, 4);
sja1105_unpack(p + 0x8, &status->n_polerr, 31, 0, 4);
sja1105_unpack(p + 0x7, &status->n_rxfrmsh, 31, 0, 4);
sja1105_unpack(p + 0x6, &status->n_rxfrm, 31, 0, 4);
sja1105_unpack(p + 0x5, &status->n_rxbytesh, 31, 0, 4);
sja1105_unpack(p + 0x4, &status->n_rxbyte, 31, 0, 4);
sja1105_unpack(p + 0x3, &status->n_txfrmsh, 31, 0, 4);
sja1105_unpack(p + 0x2, &status->n_txfrm, 31, 0, 4);
sja1105_unpack(p + 0x1, &status->n_txbytesh, 31, 0, 4);
sja1105_unpack(p + 0x0, &status->n_txbyte, 31, 0, 4);
status->n_rxfrm += status->n_rxfrmsh << 32;
status->n_rxbyte += status->n_rxbytesh << 32;
status->n_txfrm += status->n_txfrmsh << 32;
status->n_txbyte += status->n_txbytesh << 32;
}
static void
sja1105_port_status_hl2_unpack(void *buf,
struct sja1105_port_status_hl2 *status)
{
/* Make pointer arithmetic work on 4 bytes */
u32 *p = buf;
sja1105_unpack(p + 0x3, &status->n_qfull, 31, 0, 4);
sja1105_unpack(p + 0x2, &status->n_part_drop, 31, 0, 4);
sja1105_unpack(p + 0x1, &status->n_egr_disabled, 31, 0, 4);
sja1105_unpack(p + 0x0, &status->n_not_reach, 31, 0, 4);
}
static void
sja1105pqrs_port_status_qlevel_unpack(void *buf,
struct sja1105_port_status_hl2 *status)
{
/* Make pointer arithmetic work on 4 bytes */
u32 *p = buf;
int i;
for (i = 0; i < 8; i++) {
sja1105_unpack(p + i, &status->qlevel_hwm[i], 24, 16, 4);
sja1105_unpack(p + i, &status->qlevel[i], 8, 0, 4);
}
}
static int sja1105_port_status_get_mac(struct sja1105_private *priv,
struct sja1105_port_status_mac *status,
int port)
{
const struct sja1105_regs *regs = priv->info->regs;
u8 packed_buf[SJA1105_SIZE_MAC_AREA] = {0};
int rc;
/* MAC area */
rc = sja1105_spi_send_packed_buf(priv, SPI_READ, regs->mac[port],
packed_buf, SJA1105_SIZE_MAC_AREA);
if (rc < 0)
return rc;
sja1105_port_status_mac_unpack(packed_buf, status);
return 0;
}
static int sja1105_port_status_get_hl1(struct sja1105_private *priv,
struct sja1105_port_status_hl1 *status,
int port)
{
const struct sja1105_regs *regs = priv->info->regs;
u8 packed_buf[SJA1105_SIZE_HL1_AREA] = {0};
int rc;
rc = sja1105_spi_send_packed_buf(priv, SPI_READ, regs->mac_hl1[port],
packed_buf, SJA1105_SIZE_HL1_AREA);
if (rc < 0)
return rc;
sja1105_port_status_hl1_unpack(packed_buf, status);
return 0;
}
static int sja1105_port_status_get_hl2(struct sja1105_private *priv,
struct sja1105_port_status_hl2 *status,
int port)
{
const struct sja1105_regs *regs = priv->info->regs;
/* The buffer is sized for the QLEVEL area, the larger of the
* two reads below, since it is reused for both.
*/
u8 packed_buf[SJA1105_SIZE_QLEVEL_AREA] = {0};
int rc;
rc = sja1105_spi_send_packed_buf(priv, SPI_READ, regs->mac_hl2[port],
packed_buf, SJA1105_SIZE_HL2_AREA);
if (rc < 0)
return rc;
sja1105_port_status_hl2_unpack(packed_buf, status);
/* Code below is strictly P/Q/R/S specific. */
if (priv->info->device_id == SJA1105E_DEVICE_ID ||
priv->info->device_id == SJA1105T_DEVICE_ID)
return 0;
rc = sja1105_spi_send_packed_buf(priv, SPI_READ, regs->qlevel[port],
packed_buf, SJA1105_SIZE_QLEVEL_AREA);
if (rc < 0)
return rc;
sja1105pqrs_port_status_qlevel_unpack(packed_buf, status);
return 0;
}
static int sja1105_port_status_get(struct sja1105_private *priv,
struct sja1105_port_status *status,
int port)
{
int rc;
rc = sja1105_port_status_get_mac(priv, &status->mac, port);
if (rc < 0)
return rc;
rc = sja1105_port_status_get_hl1(priv, &status->hl1, port);
if (rc < 0)
return rc;
rc = sja1105_port_status_get_hl2(priv, &status->hl2, port);
if (rc < 0)
return rc;
return 0;
}
static char sja1105_port_stats[][ETH_GSTRING_LEN] = {
/* MAC-Level Diagnostic Counters */
"n_runt",
"n_soferr",
"n_alignerr",
"n_miierr",
/* MAC-Level Diagnostic Flags */
"typeerr",
"sizeerr",
"tctimeout",
"priorerr",
"nomaster",
"memov",
"memerr",
"invtyp",
"intcyov",
"domerr",
"pcfbagdrop",
"spcprior",
"ageprior",
"portdrop",
"lendrop",
"bagdrop",
"policeerr",
"drpnona664err",
"spcerr",
"agedrp",
/* High-Level Diagnostic Counters */
"n_n664err",
"n_vlanerr",
"n_unreleased",
"n_sizeerr",
"n_crcerr",
"n_vlnotfound",
"n_ctpolerr",
"n_polerr",
"n_rxfrm",
"n_rxbyte",
"n_txfrm",
"n_txbyte",
"n_qfull",
"n_part_drop",
"n_egr_disabled",
"n_not_reach",
};
static char sja1105pqrs_extra_port_stats[][ETH_GSTRING_LEN] = {
/* Queue Levels */
"qlevel_hwm_0",
"qlevel_hwm_1",
"qlevel_hwm_2",
"qlevel_hwm_3",
"qlevel_hwm_4",
"qlevel_hwm_5",
"qlevel_hwm_6",
"qlevel_hwm_7",
"qlevel_0",
"qlevel_1",
"qlevel_2",
"qlevel_3",
"qlevel_4",
"qlevel_5",
"qlevel_6",
"qlevel_7",
};
void sja1105_get_ethtool_stats(struct dsa_switch *ds, int port, u64 *data)
{
struct sja1105_private *priv = ds->priv;
struct sja1105_port_status status = {0};
int rc, i, k = 0;
rc = sja1105_port_status_get(priv, &status, port);
if (rc < 0) {
dev_err(ds->dev, "Failed to read port %d counters: %d\n",
port, rc);
return;
}
memset(data, 0, ARRAY_SIZE(sja1105_port_stats) * sizeof(u64));
data[k++] = status.mac.n_runt;
data[k++] = status.mac.n_soferr;
data[k++] = status.mac.n_alignerr;
data[k++] = status.mac.n_miierr;
data[k++] = status.mac.typeerr;
data[k++] = status.mac.sizeerr;
data[k++] = status.mac.tctimeout;
data[k++] = status.mac.priorerr;
data[k++] = status.mac.nomaster;
data[k++] = status.mac.memov;
data[k++] = status.mac.memerr;
data[k++] = status.mac.invtyp;
data[k++] = status.mac.intcyov;
data[k++] = status.mac.domerr;
data[k++] = status.mac.pcfbagdrop;
data[k++] = status.mac.spcprior;
data[k++] = status.mac.ageprior;
data[k++] = status.mac.portdrop;
data[k++] = status.mac.lendrop;
data[k++] = status.mac.bagdrop;
data[k++] = status.mac.policeerr;
data[k++] = status.mac.drpnona664err;
data[k++] = status.mac.spcerr;
data[k++] = status.mac.agedrp;
data[k++] = status.hl1.n_n664err;
data[k++] = status.hl1.n_vlanerr;
data[k++] = status.hl1.n_unreleased;
data[k++] = status.hl1.n_sizeerr;
data[k++] = status.hl1.n_crcerr;
data[k++] = status.hl1.n_vlnotfound;
data[k++] = status.hl1.n_ctpolerr;
data[k++] = status.hl1.n_polerr;
data[k++] = status.hl1.n_rxfrm;
data[k++] = status.hl1.n_rxbyte;
data[k++] = status.hl1.n_txfrm;
data[k++] = status.hl1.n_txbyte;
data[k++] = status.hl2.n_qfull;
data[k++] = status.hl2.n_part_drop;
data[k++] = status.hl2.n_egr_disabled;
data[k++] = status.hl2.n_not_reach;
if (priv->info->device_id == SJA1105E_DEVICE_ID ||
priv->info->device_id == SJA1105T_DEVICE_ID)
return;
memset(data + k, 0, ARRAY_SIZE(sja1105pqrs_extra_port_stats) *
sizeof(u64));
for (i = 0; i < 8; i++) {
data[k++] = status.hl2.qlevel_hwm[i];
data[k++] = status.hl2.qlevel[i];
}
}
void sja1105_get_strings(struct dsa_switch *ds, int port,
u32 stringset, u8 *data)
{
struct sja1105_private *priv = ds->priv;
u8 *p = data;
int i;
switch (stringset) {
case ETH_SS_STATS:
for (i = 0; i < ARRAY_SIZE(sja1105_port_stats); i++) {
strlcpy(p, sja1105_port_stats[i], ETH_GSTRING_LEN);
p += ETH_GSTRING_LEN;
}
if (priv->info->device_id == SJA1105E_DEVICE_ID ||
priv->info->device_id == SJA1105T_DEVICE_ID)
return;
for (i = 0; i < ARRAY_SIZE(sja1105pqrs_extra_port_stats); i++) {
strlcpy(p, sja1105pqrs_extra_port_stats[i],
ETH_GSTRING_LEN);
p += ETH_GSTRING_LEN;
}
break;
}
}
int sja1105_get_sset_count(struct dsa_switch *ds, int port, int sset)
{
int count = ARRAY_SIZE(sja1105_port_stats);
struct sja1105_private *priv = ds->priv;
if (sset != ETH_SS_STATS)
return -EOPNOTSUPP;
if (priv->info->device_id == SJA1105PR_DEVICE_ID ||
priv->info->device_id == SJA1105QS_DEVICE_ID)
count += ARRAY_SIZE(sja1105pqrs_extra_port_stats);
return count;
}
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2018, Sensor-Technik Wiedemann GmbH
* Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/delay.h>
#include <linux/module.h>
#include <linux/printk.h>
#include <linux/spi/spi.h>
#include <linux/errno.h>
#include <linux/gpio/consumer.h>
#include <linux/phylink.h>
#include <linux/of.h>
#include <linux/of_net.h>
#include <linux/of_mdio.h>
#include <linux/of_device.h>
#include <linux/netdev_features.h>
#include <linux/netdevice.h>
#include <linux/if_bridge.h>
#include <linux/if_ether.h>
#include "sja1105.h"
static void sja1105_hw_reset(struct gpio_desc *gpio, unsigned int pulse_len,
unsigned int startup_delay)
{
gpiod_set_value_cansleep(gpio, 1);
/* Wait for minimum reset pulse length */
msleep(pulse_len);
gpiod_set_value_cansleep(gpio, 0);
/* Wait until chip is ready after reset */
msleep(startup_delay);
}
static void
sja1105_port_allow_traffic(struct sja1105_l2_forwarding_entry *l2_fwd,
int from, int to, bool allow)
{
if (allow) {
l2_fwd[from].bc_domain |= BIT(to);
l2_fwd[from].reach_port |= BIT(to);
l2_fwd[from].fl_domain |= BIT(to);
} else {
l2_fwd[from].bc_domain &= ~BIT(to);
l2_fwd[from].reach_port &= ~BIT(to);
l2_fwd[from].fl_domain &= ~BIT(to);
}
}
/* Structure used to temporarily transport device tree
* settings into sja1105_setup
*/
struct sja1105_dt_port {
phy_interface_t phy_mode;
sja1105_mii_role_t role;
};
static int sja1105_init_mac_settings(struct sja1105_private *priv)
{
struct sja1105_mac_config_entry default_mac = {
/* Enable all 8 priority queues on egress.
* Every queue i holds the frame pointers in the inclusive
* range [base[i], top[i]]. The regions partition the whole
* frame memory, whose highest address (the hardware limit)
* is 0x1FF = 511.
*/
.top = {0x3F, 0x7F, 0xBF, 0xFF, 0x13F, 0x17F, 0x1BF, 0x1FF},
.base = {0x0, 0x40, 0x80, 0xC0, 0x100, 0x140, 0x180, 0x1C0},
.enabled = {true, true, true, true, true, true, true, true},
/* Keep standard IFG of 12 bytes on egress. */
.ifg = 0,
/* Always put the MAC speed in automatic mode, where it can be
* retrieved from the PHY object through phylib and
* sja1105_adjust_port_config.
*/
.speed = SJA1105_SPEED_AUTO,
/* No static correction for 1-step 1588 events */
.tp_delin = 0,
.tp_delout = 0,
/* Disable aging for critical TTEthernet traffic */
.maxage = 0xFF,
/* Internal VLAN (pvid) to apply to untagged ingress */
.vlanprio = 0,
.vlanid = 0,
.ing_mirr = false,
.egr_mirr = false,
/* Don't drop traffic with other EtherType than ETH_P_IP */
.drpnona664 = false,
/* Don't drop double-tagged traffic */
.drpdtag = false,
/* Don't drop untagged traffic */
.drpuntag = false,
/* Don't retag 802.1p (VID 0) traffic with the pvid */
.retag = false,
/* Enable learning and I/O on user ports by default. */
.dyn_learn = true,
.egress = false,
.ingress = false,
};
struct sja1105_mac_config_entry *mac;
struct sja1105_table *table;
int i;
table = &priv->static_config.tables[BLK_IDX_MAC_CONFIG];
/* Discard previous MAC Configuration Table */
if (table->entry_count) {
kfree(table->entries);
table->entry_count = 0;
}
table->entries = kcalloc(SJA1105_NUM_PORTS,
table->ops->unpacked_entry_size, GFP_KERNEL);
if (!table->entries)
return -ENOMEM;
/* Override table based on phylib DT bindings */
table->entry_count = SJA1105_NUM_PORTS;
mac = table->entries;
for (i = 0; i < SJA1105_NUM_PORTS; i++)
mac[i] = default_mac;
return 0;
}
static int sja1105_init_mii_settings(struct sja1105_private *priv,
struct sja1105_dt_port *ports)
{
struct device *dev = &priv->spidev->dev;
struct sja1105_xmii_params_entry *mii;
struct sja1105_table *table;
int i;
table = &priv->static_config.tables[BLK_IDX_XMII_PARAMS];
/* Discard previous xMII Mode Parameters Table */
if (table->entry_count) {
kfree(table->entries);
table->entry_count = 0;
}
table->entries = kcalloc(SJA1105_MAX_XMII_PARAMS_COUNT,
table->ops->unpacked_entry_size, GFP_KERNEL);
if (!table->entries)
return -ENOMEM;
/* Override table based on phylib DT bindings */
table->entry_count = SJA1105_MAX_XMII_PARAMS_COUNT;
mii = table->entries;
for (i = 0; i < SJA1105_NUM_PORTS; i++) {
switch (ports[i].phy_mode) {
case PHY_INTERFACE_MODE_MII:
mii->xmii_mode[i] = XMII_MODE_MII;
break;
case PHY_INTERFACE_MODE_RMII:
mii->xmii_mode[i] = XMII_MODE_RMII;
break;
case PHY_INTERFACE_MODE_RGMII:
case PHY_INTERFACE_MODE_RGMII_ID:
case PHY_INTERFACE_MODE_RGMII_RXID:
case PHY_INTERFACE_MODE_RGMII_TXID:
mii->xmii_mode[i] = XMII_MODE_RGMII;
break;
default:
dev_err(dev, "Unsupported PHY mode %s!\n",
phy_modes(ports[i].phy_mode));
}
mii->phy_mac[i] = ports[i].role;
}
return 0;
}
static int sja1105_init_static_fdb(struct sja1105_private *priv)
{
struct sja1105_table *table;
table = &priv->static_config.tables[BLK_IDX_L2_LOOKUP];
/* We only populate the FDB table through dynamic
* L2 Address Lookup entries
*/
if (table->entry_count) {
kfree(table->entries);
table->entry_count = 0;
}
return 0;
}
static int sja1105_init_l2_lookup_params(struct sja1105_private *priv)
{
struct sja1105_table *table;
struct sja1105_l2_lookup_params_entry default_l2_lookup_params = {
/* Learned FDB entries are forgotten after 300 seconds */
.maxage = SJA1105_AGEING_TIME_MS(300000),
/* All entries within a FDB bin are available for learning */
.dyn_tbsz = SJA1105ET_FDB_BIN_SIZE,
/* 2^8 + 2^5 + 2^3 + 2^2 + 2^1 + 1 in Koopman notation */
.poly = 0x97,
/* This selects between Independent VLAN Learning (IVL) and
* Shared VLAN Learning (SVL)
*/
.shared_learn = false,
/* Don't discard management traffic based on ENFPORT -
* we don't perform SMAC port enforcement anyway, so
* what we are setting here doesn't matter.
*/
.no_enf_hostprt = false,
/* Don't learn SMAC for mac_fltres1 and mac_fltres0.
* Maybe correlate with no_linklocal_learn from bridge driver?
*/
.no_mgmt_learn = true,
};
table = &priv->static_config.tables[BLK_IDX_L2_LOOKUP_PARAMS];
if (table->entry_count) {
kfree(table->entries);
table->entry_count = 0;
}
table->entries = kcalloc(SJA1105_MAX_L2_LOOKUP_PARAMS_COUNT,
table->ops->unpacked_entry_size, GFP_KERNEL);
if (!table->entries)
return -ENOMEM;
table->entry_count = SJA1105_MAX_L2_LOOKUP_PARAMS_COUNT;
/* This table only has a single entry */
((struct sja1105_l2_lookup_params_entry *)table->entries)[0] =
default_l2_lookup_params;
return 0;
}
static int sja1105_init_static_vlan(struct sja1105_private *priv)
{
struct sja1105_table *table;
struct sja1105_vlan_lookup_entry pvid = {
.ving_mirr = 0,
.vegr_mirr = 0,
.vmemb_port = 0,
.vlan_bc = 0,
.tag_port = 0,
.vlanid = 0,
};
int i;
table = &priv->static_config.tables[BLK_IDX_VLAN_LOOKUP];
/* The static VLAN table will only contain the initial pvid of 0.
* All other VLANs are to be configured through dynamic entries,
* and kept in the static configuration table as backing memory.
* The pvid of 0 is sufficient to pass traffic while the ports are
* standalone and when vlan_filtering is disabled. When filtering
* gets enabled, the switchdev core sets up the VLAN ID 1 and sets
* it as the new pvid. Actually 'pvid 1' still comes up in 'bridge
* vlan' even when vlan_filtering is off, but it has no effect.
*/
if (table->entry_count) {
kfree(table->entries);
table->entry_count = 0;
}
table->entries = kcalloc(1, table->ops->unpacked_entry_size,
GFP_KERNEL);
if (!table->entries)
return -ENOMEM;
table->entry_count = 1;
/* VLAN ID 0: all DT-defined ports are members; no restrictions on
* forwarding; always transmit priority-tagged frames as untagged.
*/
for (i = 0; i < SJA1105_NUM_PORTS; i++) {
pvid.vmemb_port |= BIT(i);
pvid.vlan_bc |= BIT(i);
pvid.tag_port &= ~BIT(i);
}
((struct sja1105_vlan_lookup_entry *)table->entries)[0] = pvid;
return 0;
}
static int sja1105_init_l2_forwarding(struct sja1105_private *priv)
{
struct sja1105_l2_forwarding_entry *l2fwd;
struct sja1105_table *table;
int i, j;
table = &priv->static_config.tables[BLK_IDX_L2_FORWARDING];
if (table->entry_count) {
kfree(table->entries);
table->entry_count = 0;
}
table->entries = kcalloc(SJA1105_MAX_L2_FORWARDING_COUNT,
table->ops->unpacked_entry_size, GFP_KERNEL);
if (!table->entries)
return -ENOMEM;
table->entry_count = SJA1105_MAX_L2_FORWARDING_COUNT;
l2fwd = table->entries;
/* First 5 entries define the forwarding rules */
for (i = 0; i < SJA1105_NUM_PORTS; i++) {
unsigned int upstream = dsa_upstream_port(priv->ds, i);
for (j = 0; j < SJA1105_NUM_TC; j++)
l2fwd[i].vlan_pmap[j] = j;
if (i == upstream)
continue;
sja1105_port_allow_traffic(l2fwd, i, upstream, true);
sja1105_port_allow_traffic(l2fwd, upstream, i, true);
}
/* Next 8 entries define VLAN PCP mapping from ingress to egress.
* Create a one-to-one mapping.
*/
for (i = 0; i < SJA1105_NUM_TC; i++)
for (j = 0; j < SJA1105_NUM_PORTS; j++)
l2fwd[SJA1105_NUM_PORTS + i].vlan_pmap[j] = i;
return 0;
}
static int sja1105_init_l2_forwarding_params(struct sja1105_private *priv)
{
struct sja1105_l2_forwarding_params_entry default_l2fwd_params = {
/* Disallow dynamic reconfiguration of vlan_pmap */
.max_dynp = 0,
/* Use a single memory partition for all ingress queues */
.part_spc = { SJA1105_MAX_FRAME_MEMORY, 0, 0, 0, 0, 0, 0, 0 },
};
struct sja1105_table *table;
table = &priv->static_config.tables[BLK_IDX_L2_FORWARDING_PARAMS];
if (table->entry_count) {
kfree(table->entries);
table->entry_count = 0;
}
table->entries = kcalloc(SJA1105_MAX_L2_FORWARDING_PARAMS_COUNT,
table->ops->unpacked_entry_size, GFP_KERNEL);
if (!table->entries)
return -ENOMEM;
table->entry_count = SJA1105_MAX_L2_FORWARDING_PARAMS_COUNT;
/* This table only has a single entry */
((struct sja1105_l2_forwarding_params_entry *)table->entries)[0] =
default_l2fwd_params;
return 0;
}
static int sja1105_init_general_params(struct sja1105_private *priv)
{
struct sja1105_general_params_entry default_general_params = {
/* Disallow dynamic changing of the mirror port */
.mirr_ptacu = 0,
.switchid = priv->ds->index,
/* Priority queue for link-local frames trapped to CPU */
.hostprio = 0,
.mac_fltres1 = SJA1105_LINKLOCAL_FILTER_A,
.mac_flt1 = SJA1105_LINKLOCAL_FILTER_A_MASK,
.incl_srcpt1 = true,
.send_meta1 = false,
.mac_fltres0 = SJA1105_LINKLOCAL_FILTER_B,
.mac_flt0 = SJA1105_LINKLOCAL_FILTER_B_MASK,
.incl_srcpt0 = true,
.send_meta0 = false,
/* The destination for traffic matching mac_fltres1 and
* mac_fltres0 on all ports except host_port. Such traffic
* received on host_port itself would be dropped, except
* by installing a temporary 'management route'
*/
.host_port = dsa_upstream_port(priv->ds, 0),
/* Same as host port */
.mirr_port = dsa_upstream_port(priv->ds, 0),
/* Link-local traffic received on casc_port will be forwarded
* to host_port without embedding the source port and device ID
* info in the destination MAC address (presumably because it
* is a cascaded port and a downstream SJA switch already did
* that). Default to an invalid port (to disable the feature)
* and overwrite this if we find any DSA (cascaded) ports.
*/
.casc_port = SJA1105_NUM_PORTS,
/* No TTEthernet */
.vllupformat = 0,
.vlmarker = 0,
.vlmask = 0,
/* Only update correctionField for 1-step PTP (L2 transport) */
.ignore2stf = 0,
/* Forcefully disable VLAN filtering by telling
* the switch that VLAN has a different EtherType.
*/
.tpid = ETH_P_SJA1105,
.tpid2 = ETH_P_SJA1105,
};
struct sja1105_table *table;
int i;
for (i = 0; i < SJA1105_NUM_PORTS; i++)
if (dsa_is_dsa_port(priv->ds, i))
default_general_params.casc_port = i;
table = &priv->static_config.tables[BLK_IDX_GENERAL_PARAMS];
if (table->entry_count) {
kfree(table->entries);
table->entry_count = 0;
}
table->entries = kcalloc(SJA1105_MAX_GENERAL_PARAMS_COUNT,
table->ops->unpacked_entry_size, GFP_KERNEL);
if (!table->entries)
return -ENOMEM;
table->entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT;
/* This table only has a single entry */
((struct sja1105_general_params_entry *)table->entries)[0] =
default_general_params;
return 0;
}
#define SJA1105_RATE_MBPS(speed) (((speed) * 64000) / 1000)
static inline void
sja1105_setup_policer(struct sja1105_l2_policing_entry *policing,
int index)
{
policing[index].sharindx = index;
policing[index].smax = 65535; /* Burst size in bytes */
policing[index].rate = SJA1105_RATE_MBPS(1000);
policing[index].maxlen = ETH_FRAME_LEN + VLAN_HLEN + ETH_FCS_LEN;
policing[index].partition = 0;
}
static int sja1105_init_l2_policing(struct sja1105_private *priv)
{
struct sja1105_l2_policing_entry *policing;
struct sja1105_table *table;
int i, j, k;
table = &priv->static_config.tables[BLK_IDX_L2_POLICING];
/* Discard previous L2 Policing Table */
if (table->entry_count) {
kfree(table->entries);
table->entry_count = 0;
}
table->entries = kcalloc(SJA1105_MAX_L2_POLICING_COUNT,
table->ops->unpacked_entry_size, GFP_KERNEL);
if (!table->entries)
return -ENOMEM;
table->entry_count = SJA1105_MAX_L2_POLICING_COUNT;
policing = table->entries;
/* k sweeps through all unicast policers (0-39).
* bcast sweeps through policers 40-44.
*/
for (i = 0, k = 0; i < SJA1105_NUM_PORTS; i++) {
int bcast = (SJA1105_NUM_PORTS * SJA1105_NUM_TC) + i;
for (j = 0; j < SJA1105_NUM_TC; j++, k++)
sja1105_setup_policer(policing, k);
/* Set up this port's policer for broadcast traffic */
sja1105_setup_policer(policing, bcast);
}
return 0;
}
static int sja1105_static_config_load(struct sja1105_private *priv,
struct sja1105_dt_port *ports)
{
int rc;
sja1105_static_config_free(&priv->static_config);
rc = sja1105_static_config_init(&priv->static_config,
priv->info->static_ops,
priv->info->device_id);
if (rc)
return rc;
/* Build static configuration */
rc = sja1105_init_mac_settings(priv);
if (rc < 0)
return rc;
rc = sja1105_init_mii_settings(priv, ports);
if (rc < 0)
return rc;
rc = sja1105_init_static_fdb(priv);
if (rc < 0)
return rc;
rc = sja1105_init_static_vlan(priv);
if (rc < 0)
return rc;
rc = sja1105_init_l2_lookup_params(priv);
if (rc < 0)
return rc;
rc = sja1105_init_l2_forwarding(priv);
if (rc < 0)
return rc;
rc = sja1105_init_l2_forwarding_params(priv);
if (rc < 0)
return rc;
rc = sja1105_init_l2_policing(priv);
if (rc < 0)
return rc;
rc = sja1105_init_general_params(priv);
if (rc < 0)
return rc;
/* Send initial configuration to hardware via SPI */
return sja1105_static_config_upload(priv);
}
static int sja1105_parse_rgmii_delays(struct sja1105_private *priv,
const struct sja1105_dt_port *ports)
{
int i;
for (i = 0; i < SJA1105_NUM_PORTS; i++) {
if (ports[i].role == XMII_MAC)
continue;
if (ports[i].phy_mode == PHY_INTERFACE_MODE_RGMII_RXID ||
ports[i].phy_mode == PHY_INTERFACE_MODE_RGMII_ID)
priv->rgmii_rx_delay[i] = true;
if (ports[i].phy_mode == PHY_INTERFACE_MODE_RGMII_TXID ||
ports[i].phy_mode == PHY_INTERFACE_MODE_RGMII_ID)
priv->rgmii_tx_delay[i] = true;
if ((priv->rgmii_rx_delay[i] || priv->rgmii_tx_delay[i]) &&
!priv->info->setup_rgmii_delay)
return -EINVAL;
}
return 0;
}
static int sja1105_parse_ports_node(struct sja1105_private *priv,
struct sja1105_dt_port *ports,
struct device_node *ports_node)
{
struct device *dev = &priv->spidev->dev;
struct device_node *child;
for_each_child_of_node(ports_node, child) {
struct device_node *phy_node;
int phy_mode;
u32 index;
/* Get switch port number from DT */
if (of_property_read_u32(child, "reg", &index) < 0) {
dev_err(dev, "Port number not defined in device tree (property \"reg\")\n");
return -ENODEV;
}
/* Get PHY mode from DT */
phy_mode = of_get_phy_mode(child);
if (phy_mode < 0) {
dev_err(dev, "Failed to read phy-mode or phy-interface-type property for port %d\n",
index);
return -ENODEV;
}
ports[index].phy_mode = phy_mode;
phy_node = of_parse_phandle(child, "phy-handle", 0);
if (!phy_node) {
if (!of_phy_is_fixed_link(child)) {
dev_err(dev, "phy-handle or fixed-link properties missing!\n");
return -ENODEV;
}
/* phy-handle is missing, but fixed-link isn't.
* So it's a fixed link. Default to PHY role.
*/
ports[index].role = XMII_PHY;
} else {
/* phy-handle present => put port in MAC role */
ports[index].role = XMII_MAC;
of_node_put(phy_node);
}
/* The MAC/PHY role can be overridden with explicit bindings */
if (of_property_read_bool(child, "sja1105,role-mac"))
ports[index].role = XMII_MAC;
else if (of_property_read_bool(child, "sja1105,role-phy"))
ports[index].role = XMII_PHY;
}
return 0;
}
static int sja1105_parse_dt(struct sja1105_private *priv,
struct sja1105_dt_port *ports)
{
struct device *dev = &priv->spidev->dev;
struct device_node *switch_node = dev->of_node;
struct device_node *ports_node;
int rc;
ports_node = of_get_child_by_name(switch_node, "ports");
if (!ports_node) {
dev_err(dev, "Incorrect bindings: absent \"ports\" node\n");
return -ENODEV;
}
rc = sja1105_parse_ports_node(priv, ports, ports_node);
of_node_put(ports_node);
return rc;
}
/* Convert back and forth MAC speed from Mbps to SJA1105 encoding */
static int sja1105_speed[] = {
[SJA1105_SPEED_AUTO] = 0,
[SJA1105_SPEED_10MBPS] = 10,
[SJA1105_SPEED_100MBPS] = 100,
[SJA1105_SPEED_1000MBPS] = 1000,
};
static sja1105_speed_t sja1105_get_speed_cfg(unsigned int speed_mbps)
{
int i;
for (i = SJA1105_SPEED_AUTO; i <= SJA1105_SPEED_1000MBPS; i++)
if (sja1105_speed[i] == speed_mbps)
return i;
return -EINVAL;
}
/* Set link speed and enable/disable traffic I/O in the MAC configuration
* for a specific port.
*
* @speed_mbps: If 0, revert the MAC speed to SJA1105_SPEED_AUTO, else
* adapt the MAC to the PHY speed.
* @enabled: Manage Rx and Tx settings for this port. Overrides the static
* configuration settings.
*/
static int sja1105_adjust_port_config(struct sja1105_private *priv, int port,
int speed_mbps, bool enabled)
{
struct sja1105_xmii_params_entry *mii;
struct sja1105_mac_config_entry *mac;
struct device *dev = priv->ds->dev;
sja1105_phy_interface_t phy_mode;
sja1105_speed_t speed;
int rc;
mii = priv->static_config.tables[BLK_IDX_XMII_PARAMS].entries;
mac = priv->static_config.tables[BLK_IDX_MAC_CONFIG].entries;
speed = sja1105_get_speed_cfg(speed_mbps);
if (speed_mbps && speed < 0) {
dev_err(dev, "Invalid speed %iMbps\n", speed_mbps);
return -EINVAL;
}
/* If requested, overwrite SJA1105_SPEED_AUTO from the static MAC
* configuration table, since this will be used for the clocking setup,
* and we no longer need to store it in the static config (already told
* hardware we want auto during upload phase).
*/
if (speed_mbps)
mac[port].speed = speed;
else
mac[port].speed = SJA1105_SPEED_AUTO;
/* On P/Q/R/S, one can read from the device via the MAC reconfiguration
* tables. On E/T, MAC reconfig tables are not readable, only writable.
* We have to *know* what the MAC looks like. For the sake of keeping
* the code common, we'll use the static configuration tables as a
* reasonable approximation for both E/T and P/Q/R/S.
*/
mac[port].ingress = enabled;
mac[port].egress = enabled;
/* Write to the dynamic reconfiguration tables */
rc = sja1105_dynamic_config_write(priv, BLK_IDX_MAC_CONFIG,
port, &mac[port], true);
if (rc < 0) {
dev_err(dev, "Failed to write MAC config: %d\n", rc);
return rc;
}
/* Reconfigure the PLLs for the RGMII interfaces (required 125 MHz at
* gigabit, 25 MHz at 100 Mbps and 2.5 MHz at 10 Mbps). For MII and
* RMII no change of the clock setup is required. Actually, changing
* the clock setup does interrupt the clock signal for a certain time
* which causes trouble for all PHYs relying on this signal.
*/
if (!enabled)
return 0;
phy_mode = mii->xmii_mode[port];
if (phy_mode != XMII_MODE_RGMII)
return 0;
return sja1105_clocking_setup_port(priv, port);
}
static void sja1105_adjust_link(struct dsa_switch *ds, int port,
struct phy_device *phydev)
{
struct sja1105_private *priv = ds->priv;
if (!phydev->link)
sja1105_adjust_port_config(priv, port, 0, false);
else
sja1105_adjust_port_config(priv, port, phydev->speed, true);
}
static void sja1105_phylink_validate(struct dsa_switch *ds, int port,
unsigned long *supported,
struct phylink_link_state *state)
{
/* Construct a new mask which exhaustively contains all link features
* supported by the MAC, and then apply that (logical AND) to what will
* be sent to the PHY for "marketing".
*/
__ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, };
struct sja1105_private *priv = ds->priv;
struct sja1105_xmii_params_entry *mii;
mii = priv->static_config.tables[BLK_IDX_XMII_PARAMS].entries;
/* The MAC does not support pause frames, and also doesn't
* support half-duplex traffic modes.
*/
phylink_set(mask, Autoneg);
phylink_set(mask, MII);
phylink_set(mask, 10baseT_Full);
phylink_set(mask, 100baseT_Full);
if (mii->xmii_mode[port] == XMII_MODE_RGMII)
phylink_set(mask, 1000baseT_Full);
bitmap_and(supported, supported, mask, __ETHTOOL_LINK_MODE_MASK_NBITS);
bitmap_and(state->advertising, state->advertising, mask,
__ETHTOOL_LINK_MODE_MASK_NBITS);
}
/* First-generation switches have a 4-way set associative TCAM that
* holds the FDB entries. An FDB index spans from 0 to 1023 and is comprised of
* a "bin" (grouping of 4 entries) and a "way" (an entry within a bin).
* For the placement of a newly learnt FDB entry, the switch selects the bin
* based on a hash function, and the way within that bin incrementally.
*/
static inline int sja1105et_fdb_index(int bin, int way)
{
return bin * SJA1105ET_FDB_BIN_SIZE + way;
}
static int sja1105_is_fdb_entry_in_bin(struct sja1105_private *priv, int bin,
const u8 *addr, u16 vid,
struct sja1105_l2_lookup_entry *match,
int *last_unused)
{
int way;
for (way = 0; way < SJA1105ET_FDB_BIN_SIZE; way++) {
struct sja1105_l2_lookup_entry l2_lookup = {0};
int index = sja1105et_fdb_index(bin, way);
/* Skip unused entries, optionally recording the last
 * one seen in *last_unused
 */
if (sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
index, &l2_lookup)) {
if (last_unused)
*last_unused = way;
continue;
}
if (l2_lookup.macaddr == ether_addr_to_u64(addr) &&
l2_lookup.vlanid == vid) {
if (match)
*match = l2_lookup;
return way;
}
}
/* Return an invalid entry index if not found */
return -1;
}
static int sja1105_fdb_add(struct dsa_switch *ds, int port,
const unsigned char *addr, u16 vid)
{
struct sja1105_l2_lookup_entry l2_lookup = {0};
struct sja1105_private *priv = ds->priv;
struct device *dev = ds->dev;
int last_unused = -1;
int bin, way;
bin = sja1105_fdb_hash(priv, addr, vid);
way = sja1105_is_fdb_entry_in_bin(priv, bin, addr, vid,
&l2_lookup, &last_unused);
if (way >= 0) {
/* We have an FDB entry. Is our port in the destination
* mask? If yes, we need to do nothing. If not, we need
* to rewrite the entry by adding this port to it.
*/
if (l2_lookup.destports & BIT(port))
return 0;
l2_lookup.destports |= BIT(port);
} else {
/* We don't have an FDB entry. We construct a new one and
 * try to find a place for it within the FDB table.
 */
l2_lookup.macaddr = ether_addr_to_u64(addr);
l2_lookup.destports = BIT(port);
l2_lookup.vlanid = vid;
if (last_unused >= 0) {
way = last_unused;
} else {
/* Bin is full, need to evict somebody.
 * Choose victim at random. If you get these messages
 * often, you may need to consider changing the
 * distribution function:
 * static_config[BLK_IDX_L2_LOOKUP_PARAMS].entries->poly
 */
get_random_bytes(&way, sizeof(u8));
way %= SJA1105ET_FDB_BIN_SIZE;
dev_warn(dev, "Warning, FDB bin %d full while adding entry for %pM. Evicting entry %u.\n",
bin, addr, way);
/* Evict entry. Compute the index only now: way was -1
 * (not found) when this else branch was entered.
 */
sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
sja1105et_fdb_index(bin, way),
NULL, false);
}
}
l2_lookup.index = sja1105et_fdb_index(bin, way);
return sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
l2_lookup.index, &l2_lookup,
true);
}
static int sja1105_fdb_del(struct dsa_switch *ds, int port,
const unsigned char *addr, u16 vid)
{
struct sja1105_l2_lookup_entry l2_lookup = {0};
struct sja1105_private *priv = ds->priv;
int index, bin, way;
bool keep;
bin = sja1105_fdb_hash(priv, addr, vid);
way = sja1105_is_fdb_entry_in_bin(priv, bin, addr, vid,
&l2_lookup, NULL);
if (way < 0)
return 0;
index = sja1105et_fdb_index(bin, way);
/* We have an FDB entry. Is our port in the destination mask? If yes,
* we need to remove it. If the resulting port mask becomes empty, we
* need to completely evict the FDB entry.
* Otherwise we just write it back.
*/
if (l2_lookup.destports & BIT(port))
l2_lookup.destports &= ~BIT(port);
keep = !!l2_lookup.destports;
return sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
index, &l2_lookup, keep);
}
static int sja1105_fdb_dump(struct dsa_switch *ds, int port,
dsa_fdb_dump_cb_t *cb, void *data)
{
struct sja1105_private *priv = ds->priv;
struct device *dev = ds->dev;
int i;
for (i = 0; i < SJA1105_MAX_L2_LOOKUP_COUNT; i++) {
struct sja1105_l2_lookup_entry l2_lookup = {0};
u8 macaddr[ETH_ALEN];
int rc;
rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
i, &l2_lookup);
/* No fdb entry at i, not an issue */
if (rc == -EINVAL)
continue;
if (rc) {
dev_err(dev, "Failed to dump FDB: %d\n", rc);
return rc;
}
/* FDB dump callback is per port. This means we have to
* disregard a valid entry if it's not for this port, even if
* only to revisit it later. This is inefficient because the
* 1024-sized FDB table needs to be traversed 4 times through
* SPI during a 'bridge fdb show' command.
*/
if (!(l2_lookup.destports & BIT(port)))
continue;
u64_to_ether_addr(l2_lookup.macaddr, macaddr);
cb(macaddr, l2_lookup.vlanid, false, data);
}
return 0;
}
/* This callback needs to be present */
static int sja1105_mdb_prepare(struct dsa_switch *ds, int port,
const struct switchdev_obj_port_mdb *mdb)
{
return 0;
}
static void sja1105_mdb_add(struct dsa_switch *ds, int port,
const struct switchdev_obj_port_mdb *mdb)
{
sja1105_fdb_add(ds, port, mdb->addr, mdb->vid);
}
static int sja1105_mdb_del(struct dsa_switch *ds, int port,
const struct switchdev_obj_port_mdb *mdb)
{
return sja1105_fdb_del(ds, port, mdb->addr, mdb->vid);
}
static int sja1105_bridge_member(struct dsa_switch *ds, int port,
struct net_device *br, bool member)
{
struct sja1105_l2_forwarding_entry *l2_fwd;
struct sja1105_private *priv = ds->priv;
int i, rc;
l2_fwd = priv->static_config.tables[BLK_IDX_L2_FORWARDING].entries;
for (i = 0; i < SJA1105_NUM_PORTS; i++) {
/* Add this port to the forwarding matrix of the
* other ports in the same bridge, and vice versa.
*/
if (!dsa_is_user_port(ds, i))
continue;
/* For the ports already under the bridge, only one thing needs
* to be done, and that is to add this port to their
* reachability domain. So we can perform the SPI write for
* them immediately. However, for this port itself (the one
* that is new to the bridge), we need to add all other ports
* to its reachability domain. So we do that incrementally in
* this loop, and perform the SPI write only at the end, once
* the domain contains all other bridge ports.
*/
if (i == port)
continue;
if (dsa_to_port(ds, i)->bridge_dev != br)
continue;
sja1105_port_allow_traffic(l2_fwd, i, port, member);
sja1105_port_allow_traffic(l2_fwd, port, i, member);
rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_FORWARDING,
i, &l2_fwd[i], true);
if (rc < 0)
return rc;
}
return sja1105_dynamic_config_write(priv, BLK_IDX_L2_FORWARDING,
port, &l2_fwd[port], true);
}
static int sja1105_bridge_join(struct dsa_switch *ds, int port,
struct net_device *br)
{
return sja1105_bridge_member(ds, port, br, true);
}
static void sja1105_bridge_leave(struct dsa_switch *ds, int port,
struct net_device *br)
{
sja1105_bridge_member(ds, port, br, false);
}
/* For situations where we need to change a setting at runtime that is only
* available through the static configuration, resetting the switch in order
* to upload the new static config is unavoidable. Back up the settings we
* modify at runtime (currently only MAC) and restore them after uploading,
* such that this operation is relatively seamless.
*/
static int sja1105_static_config_reload(struct sja1105_private *priv)
{
struct sja1105_mac_config_entry *mac;
int speed_mbps[SJA1105_NUM_PORTS];
int rc, i;
mac = priv->static_config.tables[BLK_IDX_MAC_CONFIG].entries;
/* Back up settings changed by sja1105_adjust_port_config
 * and restore their defaults.
*/
for (i = 0; i < SJA1105_NUM_PORTS; i++) {
speed_mbps[i] = sja1105_speed[mac[i].speed];
mac[i].speed = SJA1105_SPEED_AUTO;
}
/* Reset switch and send updated static configuration */
rc = sja1105_static_config_upload(priv);
if (rc < 0)
goto out;
/* Configure the CGU (PLLs) for MII and RMII PHYs.
* For these interfaces there is no dynamic configuration
* needed, since PLLs have the same settings at all speeds.
*/
rc = sja1105_clocking_setup(priv);
if (rc < 0)
goto out;
for (i = 0; i < SJA1105_NUM_PORTS; i++) {
bool enabled = (speed_mbps[i] != 0);
rc = sja1105_adjust_port_config(priv, i, speed_mbps[i],
enabled);
if (rc < 0)
goto out;
}
out:
return rc;
}
/* The TPID setting belongs to the General Parameters table,
* which can only be partially reconfigured at runtime (and not the TPID).
* So a switch reset is required.
*/
static int sja1105_change_tpid(struct sja1105_private *priv,
u16 tpid, u16 tpid2)
{
struct sja1105_general_params_entry *general_params;
struct sja1105_table *table;
table = &priv->static_config.tables[BLK_IDX_GENERAL_PARAMS];
general_params = table->entries;
general_params->tpid = tpid;
general_params->tpid2 = tpid2;
return sja1105_static_config_reload(priv);
}
static int sja1105_pvid_apply(struct sja1105_private *priv, int port, u16 pvid)
{
struct sja1105_mac_config_entry *mac;
mac = priv->static_config.tables[BLK_IDX_MAC_CONFIG].entries;
mac[port].vlanid = pvid;
return sja1105_dynamic_config_write(priv, BLK_IDX_MAC_CONFIG, port,
&mac[port], true);
}
static int sja1105_is_vlan_configured(struct sja1105_private *priv, u16 vid)
{
struct sja1105_vlan_lookup_entry *vlan;
int count, i;
vlan = priv->static_config.tables[BLK_IDX_VLAN_LOOKUP].entries;
count = priv->static_config.tables[BLK_IDX_VLAN_LOOKUP].entry_count;
for (i = 0; i < count; i++)
if (vlan[i].vlanid == vid)
return i;
/* Return an invalid entry index if not found */
return -1;
}
static int sja1105_vlan_apply(struct sja1105_private *priv, int port, u16 vid,
bool enabled, bool untagged)
{
struct sja1105_vlan_lookup_entry *vlan;
struct sja1105_table *table;
bool keep = true;
int match, rc;
table = &priv->static_config.tables[BLK_IDX_VLAN_LOOKUP];
match = sja1105_is_vlan_configured(priv, vid);
if (match < 0) {
/* Can't delete a missing entry. */
if (!enabled)
return 0;
rc = sja1105_table_resize(table, table->entry_count + 1);
if (rc)
return rc;
match = table->entry_count - 1;
}
/* Assign pointer after the resize (it's new memory) */
vlan = table->entries;
vlan[match].vlanid = vid;
if (enabled) {
vlan[match].vlan_bc |= BIT(port);
vlan[match].vmemb_port |= BIT(port);
} else {
vlan[match].vlan_bc &= ~BIT(port);
vlan[match].vmemb_port &= ~BIT(port);
}
/* Also unset tag_port if removing this VLAN was requested,
* just so we don't have a confusing bitmap (no practical purpose).
*/
if (untagged || !enabled)
vlan[match].tag_port &= ~BIT(port);
else
vlan[match].tag_port |= BIT(port);
/* If there's no port left as member of this VLAN,
* it's time for it to go.
*/
if (!vlan[match].vmemb_port)
keep = false;
dev_dbg(priv->ds->dev,
"%s: port %d, vid %llu, broadcast domain 0x%llx, "
"port members 0x%llx, tagged ports 0x%llx, keep %d\n",
__func__, port, vlan[match].vlanid, vlan[match].vlan_bc,
vlan[match].vmemb_port, vlan[match].tag_port, keep);
rc = sja1105_dynamic_config_write(priv, BLK_IDX_VLAN_LOOKUP, vid,
&vlan[match], keep);
if (rc < 0)
return rc;
if (!keep)
return sja1105_table_delete_entry(table, match);
return 0;
}
static enum dsa_tag_protocol
sja1105_get_tag_protocol(struct dsa_switch *ds, int port)
{
return DSA_TAG_PROTO_NONE;
}
/* This callback needs to be present */
static int sja1105_vlan_prepare(struct dsa_switch *ds, int port,
const struct switchdev_obj_port_vlan *vlan)
{
return 0;
}
static int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled)
{
struct sja1105_private *priv = ds->priv;
int rc;
if (enabled)
/* Enable VLAN filtering. */
rc = sja1105_change_tpid(priv, ETH_P_8021Q, ETH_P_8021AD);
else
/* Disable VLAN filtering. */
rc = sja1105_change_tpid(priv, ETH_P_SJA1105, ETH_P_SJA1105);
if (rc)
dev_err(ds->dev, "Failed to change VLAN Ethertype\n");
return rc;
}
static void sja1105_vlan_add(struct dsa_switch *ds, int port,
const struct switchdev_obj_port_vlan *vlan)
{
struct sja1105_private *priv = ds->priv;
u16 vid;
int rc;
for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) {
rc = sja1105_vlan_apply(priv, port, vid, true, vlan->flags &
BRIDGE_VLAN_INFO_UNTAGGED);
if (rc < 0) {
dev_err(ds->dev, "Failed to add VLAN %d to port %d: %d\n",
vid, port, rc);
return;
}
if (vlan->flags & BRIDGE_VLAN_INFO_PVID) {
rc = sja1105_pvid_apply(ds->priv, port, vid);
if (rc < 0) {
dev_err(ds->dev, "Failed to set pvid %d on port %d: %d\n",
vid, port, rc);
return;
}
}
}
}
static int sja1105_vlan_del(struct dsa_switch *ds, int port,
const struct switchdev_obj_port_vlan *vlan)
{
struct sja1105_private *priv = ds->priv;
u16 vid;
int rc;
for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) {
rc = sja1105_vlan_apply(priv, port, vid, false, vlan->flags &
BRIDGE_VLAN_INFO_UNTAGGED);
if (rc < 0) {
dev_err(ds->dev, "Failed to remove VLAN %d from port %d: %d\n",
vid, port, rc);
return rc;
}
}
return 0;
}
/* The programming model for the SJA1105 switch is "all-at-once" via static
* configuration tables. Some of these can be dynamically modified at runtime,
* but not the xMII mode parameters table.
* Furthermore, some PHYs may not have crystals for generating their clocks
* (e.g. RMII). Instead, their 50MHz clock is supplied via the SJA1105 port's
* ref_clk pin. So port clocking needs to be initialized early, before
* connecting to PHYs is attempted, otherwise they won't respond through MDIO.
* Setting correct PHY link speed does not matter now.
* But dsa_slave_phy_setup is called later than sja1105_setup, so the PHY
* bindings are not yet parsed by DSA core. We need to parse early so that we
* can populate the xMII mode parameters table.
*/
static int sja1105_setup(struct dsa_switch *ds)
{
struct sja1105_dt_port ports[SJA1105_NUM_PORTS];
struct sja1105_private *priv = ds->priv;
int rc;
rc = sja1105_parse_dt(priv, ports);
if (rc < 0) {
dev_err(ds->dev, "Failed to parse DT: %d\n", rc);
return rc;
}
/* Error out early if internal delays are required through DT
* and we can't apply them.
*/
rc = sja1105_parse_rgmii_delays(priv, ports);
if (rc < 0) {
dev_err(ds->dev, "RGMII delay not supported\n");
return rc;
}
/* Create and send configuration down to device */
rc = sja1105_static_config_load(priv, ports);
if (rc < 0) {
dev_err(ds->dev, "Failed to load static config: %d\n", rc);
return rc;
}
/* Configure the CGU (PHY link modes and speeds) */
rc = sja1105_clocking_setup(priv);
if (rc < 0) {
dev_err(ds->dev, "Failed to configure MII clocking: %d\n", rc);
return rc;
}
/* On SJA1105, VLAN filtering per se is always enabled in hardware.
* The only thing we can do to disable it is lie about what the 802.1Q
* EtherType is.
* So it will still try to apply VLAN filtering, but all ingress
* traffic (except frames received with EtherType of ETH_P_SJA1105)
* will be internally tagged with a distorted VLAN header where the
* TPID is ETH_P_SJA1105, and the VLAN ID is the port pvid.
*/
ds->vlan_filtering_is_global = true;
return 0;
}
/* The MAXAGE setting belongs to the L2 Forwarding Parameters table,
* which cannot be reconfigured at runtime. So a switch reset is required.
*/
static int sja1105_set_ageing_time(struct dsa_switch *ds,
unsigned int ageing_time)
{
struct sja1105_l2_lookup_params_entry *l2_lookup_params;
struct sja1105_private *priv = ds->priv;
struct sja1105_table *table;
unsigned int maxage;
table = &priv->static_config.tables[BLK_IDX_L2_LOOKUP_PARAMS];
l2_lookup_params = table->entries;
maxage = SJA1105_AGEING_TIME_MS(ageing_time);
if (l2_lookup_params->maxage == maxage)
return 0;
l2_lookup_params->maxage = maxage;
return sja1105_static_config_reload(priv);
}
static const struct dsa_switch_ops sja1105_switch_ops = {
.get_tag_protocol = sja1105_get_tag_protocol,
.setup = sja1105_setup,
.adjust_link = sja1105_adjust_link,
.set_ageing_time = sja1105_set_ageing_time,
.phylink_validate = sja1105_phylink_validate,
.get_strings = sja1105_get_strings,
.get_ethtool_stats = sja1105_get_ethtool_stats,
.get_sset_count = sja1105_get_sset_count,
.port_fdb_dump = sja1105_fdb_dump,
.port_fdb_add = sja1105_fdb_add,
.port_fdb_del = sja1105_fdb_del,
.port_bridge_join = sja1105_bridge_join,
.port_bridge_leave = sja1105_bridge_leave,
.port_vlan_prepare = sja1105_vlan_prepare,
.port_vlan_filtering = sja1105_vlan_filtering,
.port_vlan_add = sja1105_vlan_add,
.port_vlan_del = sja1105_vlan_del,
.port_mdb_prepare = sja1105_mdb_prepare,
.port_mdb_add = sja1105_mdb_add,
.port_mdb_del = sja1105_mdb_del,
};
static int sja1105_check_device_id(struct sja1105_private *priv)
{
const struct sja1105_regs *regs = priv->info->regs;
u8 prod_id[SJA1105_SIZE_DEVICE_ID] = {0};
struct device *dev = &priv->spidev->dev;
u64 device_id;
u64 part_no;
int rc;
rc = sja1105_spi_send_int(priv, SPI_READ, regs->device_id,
&device_id, SJA1105_SIZE_DEVICE_ID);
if (rc < 0)
return rc;
if (device_id != priv->info->device_id) {
dev_err(dev, "Expected device ID 0x%llx but read 0x%llx\n",
priv->info->device_id, device_id);
return -ENODEV;
}
rc = sja1105_spi_send_packed_buf(priv, SPI_READ, regs->prod_id,
prod_id, SJA1105_SIZE_DEVICE_ID);
if (rc < 0)
return rc;
sja1105_unpack(prod_id, &part_no, 19, 4, SJA1105_SIZE_DEVICE_ID);
if (part_no != priv->info->part_no) {
dev_err(dev, "Expected part number 0x%llx but read 0x%llx\n",
priv->info->part_no, part_no);
return -ENODEV;
}
return 0;
}
static int sja1105_probe(struct spi_device *spi)
{
struct device *dev = &spi->dev;
struct sja1105_private *priv;
struct dsa_switch *ds;
int rc;
if (!dev->of_node) {
dev_err(dev, "No DTS bindings for SJA1105 driver\n");
return -EINVAL;
}
priv = devm_kzalloc(dev, sizeof(struct sja1105_private), GFP_KERNEL);
if (!priv)
return -ENOMEM;
/* Configure the optional reset pin and bring up switch */
priv->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH);
if (IS_ERR(priv->reset_gpio))
dev_dbg(dev, "reset-gpios not defined, ignoring\n");
else
sja1105_hw_reset(priv->reset_gpio, 1, 1);
/* Populate our driver private structure (priv) based on
* the device tree node that was probed (spi)
*/
priv->spidev = spi;
spi_set_drvdata(spi, priv);
/* Configure the SPI bus */
spi->bits_per_word = 8;
rc = spi_setup(spi);
if (rc < 0) {
dev_err(dev, "Could not init SPI\n");
return rc;
}
priv->info = of_device_get_match_data(dev);
/* Detect hardware device */
rc = sja1105_check_device_id(priv);
if (rc < 0) {
dev_err(dev, "Device ID check failed: %d\n", rc);
return rc;
}
dev_info(dev, "Probed switch chip: %s\n", priv->info->name);
ds = dsa_switch_alloc(dev, SJA1105_NUM_PORTS);
if (!ds)
return -ENOMEM;
ds->ops = &sja1105_switch_ops;
ds->priv = priv;
priv->ds = ds;
return dsa_register_switch(priv->ds);
}
static int sja1105_remove(struct spi_device *spi)
{
struct sja1105_private *priv = spi_get_drvdata(spi);
dsa_unregister_switch(priv->ds);
sja1105_static_config_free(&priv->static_config);
return 0;
}
static const struct of_device_id sja1105_dt_ids[] = {
{ .compatible = "nxp,sja1105e", .data = &sja1105e_info },
{ .compatible = "nxp,sja1105t", .data = &sja1105t_info },
{ .compatible = "nxp,sja1105p", .data = &sja1105p_info },
{ .compatible = "nxp,sja1105q", .data = &sja1105q_info },
{ .compatible = "nxp,sja1105r", .data = &sja1105r_info },
{ .compatible = "nxp,sja1105s", .data = &sja1105s_info },
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, sja1105_dt_ids);
static struct spi_driver sja1105_driver = {
.driver = {
.name = "sja1105",
.owner = THIS_MODULE,
.of_match_table = of_match_ptr(sja1105_dt_ids),
},
.probe = sja1105_probe,
.remove = sja1105_remove,
};
module_spi_driver(sja1105_driver);
MODULE_AUTHOR("Vladimir Oltean <olteanv@gmail.com>");
MODULE_AUTHOR("Georg Waibel <georg.waibel@sensor-technik.de>");
MODULE_DESCRIPTION("SJA1105 Driver");
MODULE_LICENSE("GPL v2");
// SPDX-License-Identifier: BSD-3-Clause
/* Copyright (c) 2016-2018, NXP Semiconductors
* Copyright (c) 2018, Sensor-Technik Wiedemann GmbH
* Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
*/
#include <linux/spi/spi.h>
#include <linux/packing.h>
#include "sja1105.h"
#define SJA1105_SIZE_PORT_CTRL 4
#define SJA1105_SIZE_RESET_CMD 4
#define SJA1105_SIZE_SPI_MSG_HEADER 4
#define SJA1105_SIZE_SPI_MSG_MAXLEN (64 * 4)
#define SJA1105_SIZE_SPI_TRANSFER_MAX \
(SJA1105_SIZE_SPI_MSG_HEADER + SJA1105_SIZE_SPI_MSG_MAXLEN)
static int sja1105_spi_transfer(const struct sja1105_private *priv,
const void *tx, void *rx, int size)
{
struct spi_device *spi = priv->spidev;
struct spi_transfer transfer = {
.tx_buf = tx,
.rx_buf = rx,
.len = size,
};
struct spi_message msg;
int rc;
if (size > SJA1105_SIZE_SPI_TRANSFER_MAX) {
dev_err(&spi->dev, "SPI message (%d) longer than max of %d\n",
size, SJA1105_SIZE_SPI_TRANSFER_MAX);
return -EMSGSIZE;
}
spi_message_init(&msg);
spi_message_add_tail(&transfer, &msg);
rc = spi_sync(spi, &msg);
if (rc < 0) {
dev_err(&spi->dev, "SPI transfer failed: %d\n", rc);
return rc;
}
return rc;
}
static void
sja1105_spi_message_pack(void *buf, const struct sja1105_spi_message *msg)
{
const int size = SJA1105_SIZE_SPI_MSG_HEADER;
memset(buf, 0, size);
sja1105_pack(buf, &msg->access, 31, 31, size);
sja1105_pack(buf, &msg->read_count, 30, 25, size);
sja1105_pack(buf, &msg->address, 24, 4, size);
}
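The header layout being packed here (ACCESS in bit 31, READ_COUNT in bits 30:25, ADDRESS in bits 24:4, bits 3:0 unused) can be modeled with plain shifts. A standalone sketch with a hypothetical function name, equivalent in layout to sja1105_spi_message_pack() above:

```c
#include <stdint.h>

/* Model of the 4-byte SJA1105 SPI control word:
 *   bit 31     - access (read/write select)
 *   bits 30:25 - read_count (number of 32-bit words to read)
 *   bits 24:4  - register address
 *   bits 3:0   - unused (zero)
 */
static uint32_t spi_msg_pack(uint32_t access, uint32_t read_count,
			     uint32_t address)
{
	return ((access & 0x1u) << 31) |
	       ((read_count & 0x3fu) << 25) |
	       ((address & 0x1fffffu) << 4);
}
```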
/* If @rw is:
* - SPI_WRITE: creates and sends an SPI write message at absolute
* address reg_addr, taking size_bytes from *packed_buf
* - SPI_READ: creates and sends an SPI read message from absolute
* address reg_addr, writing size_bytes into *packed_buf
*
* This function should only be called if it is known beforehand that
* @size_bytes is smaller than SIZE_SPI_MSG_MAXLEN. Larger packed buffers
* are chunked in smaller pieces by sja1105_spi_send_long_packed_buf below.
*/
int sja1105_spi_send_packed_buf(const struct sja1105_private *priv,
sja1105_spi_rw_mode_t rw, u64 reg_addr,
void *packed_buf, size_t size_bytes)
{
u8 tx_buf[SJA1105_SIZE_SPI_TRANSFER_MAX] = {0};
u8 rx_buf[SJA1105_SIZE_SPI_TRANSFER_MAX] = {0};
const int msg_len = size_bytes + SJA1105_SIZE_SPI_MSG_HEADER;
struct sja1105_spi_message msg = {0};
int rc;
if (msg_len > SJA1105_SIZE_SPI_TRANSFER_MAX)
return -ERANGE;
msg.access = rw;
msg.address = reg_addr;
if (rw == SPI_READ)
msg.read_count = size_bytes / 4;
sja1105_spi_message_pack(tx_buf, &msg);
if (rw == SPI_WRITE)
memcpy(tx_buf + SJA1105_SIZE_SPI_MSG_HEADER,
packed_buf, size_bytes);
rc = sja1105_spi_transfer(priv, tx_buf, rx_buf, msg_len);
if (rc < 0)
return rc;
if (rw == SPI_READ)
memcpy(packed_buf, rx_buf + SJA1105_SIZE_SPI_MSG_HEADER,
size_bytes);
return 0;
}
/* If @rw is:
* - SPI_WRITE: creates and sends an SPI write message at absolute
* address reg_addr, taking size_bytes from *packed_buf
* - SPI_READ: creates and sends an SPI read message from absolute
* address reg_addr, writing size_bytes into *packed_buf
*
* The u64 *value is unpacked, meaning that it's stored in the native
* CPU endianness and directly usable by software running on the core.
*
* This is a wrapper around sja1105_spi_send_packed_buf().
*/
int sja1105_spi_send_int(const struct sja1105_private *priv,
sja1105_spi_rw_mode_t rw, u64 reg_addr,
u64 *value, u64 size_bytes)
{
u8 packed_buf[SJA1105_SIZE_SPI_MSG_MAXLEN];
int rc;
if (size_bytes > SJA1105_SIZE_SPI_MSG_MAXLEN)
return -ERANGE;
if (rw == SPI_WRITE)
sja1105_pack(packed_buf, value, 8 * size_bytes - 1, 0,
size_bytes);
rc = sja1105_spi_send_packed_buf(priv, rw, reg_addr, packed_buf,
size_bytes);
if (rw == SPI_READ)
sja1105_unpack(packed_buf, value, 8 * size_bytes - 1, 0,
size_bytes);
return rc;
}
/* Should be used if a @packed_buf larger than SJA1105_SIZE_SPI_MSG_MAXLEN
* must be sent/received. Splitting the buffer into chunks and assembling
* those into SPI messages is done automatically by this function.
*/
int sja1105_spi_send_long_packed_buf(const struct sja1105_private *priv,
sja1105_spi_rw_mode_t rw, u64 base_addr,
void *packed_buf, u64 buf_len)
{
struct chunk {
void *buf_ptr;
int len;
u64 spi_address;
} chunk;
int distance_to_end;
int rc;
/* Initialize chunk */
chunk.buf_ptr = packed_buf;
chunk.spi_address = base_addr;
chunk.len = min_t(int, buf_len, SJA1105_SIZE_SPI_MSG_MAXLEN);
while (chunk.len) {
rc = sja1105_spi_send_packed_buf(priv, rw, chunk.spi_address,
chunk.buf_ptr, chunk.len);
if (rc < 0)
return rc;
chunk.buf_ptr += chunk.len;
chunk.spi_address += chunk.len / 4;
distance_to_end = (uintptr_t)(packed_buf + buf_len -
chunk.buf_ptr);
chunk.len = min(distance_to_end, SJA1105_SIZE_SPI_MSG_MAXLEN);
}
return 0;
}
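The chunking above can be modeled standalone: payloads are capped at SJA1105_SIZE_SPI_MSG_MAXLEN (256 bytes), and because registers are word-addressed, the SPI address advances by len / 4 per chunk. A sketch with illustrative names:

```c
#include <stdint.h>

#define MSG_MAXLEN (64 * 4)	/* 256-byte payload cap per SPI message */

/* Walk a buffer the way sja1105_spi_send_long_packed_buf() does:
 * return the number of chunks and report the word address that the
 * final chunk starts at (registers are 4 bytes wide).
 */
static int count_chunks(uint64_t base_addr, int buf_len,
			uint64_t *last_addr)
{
	int offset = 0, nchunks = 0;

	while (offset < buf_len) {
		int len = buf_len - offset;

		if (len > MSG_MAXLEN)
			len = MSG_MAXLEN;
		*last_addr = base_addr + offset / 4;	/* word-addressed */
		offset += len;
		nchunks++;
	}
	return nchunks;
}
```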
/* Back-ported structure from UM11040 Table 112.
* Reset control register (addr. 100440h)
* In the SJA1105 E/T, only warm_rst and cold_rst are
* supported (exposed in UM10944 as rst_ctrl), but the bit
* offsets of warm_rst and cold_rst are actually reversed.
*/
struct sja1105_reset_cmd {
u64 switch_rst;
u64 cfg_rst;
u64 car_rst;
u64 otp_rst;
u64 warm_rst;
u64 cold_rst;
u64 por_rst;
};
static void
sja1105et_reset_cmd_pack(void *buf, const struct sja1105_reset_cmd *reset)
{
const int size = SJA1105_SIZE_RESET_CMD;
memset(buf, 0, size);
sja1105_pack(buf, &reset->cold_rst, 3, 3, size);
sja1105_pack(buf, &reset->warm_rst, 2, 2, size);
}
static void
sja1105pqrs_reset_cmd_pack(void *buf, const struct sja1105_reset_cmd *reset)
{
const int size = SJA1105_SIZE_RESET_CMD;
memset(buf, 0, size);
sja1105_pack(buf, &reset->switch_rst, 8, 8, size);
sja1105_pack(buf, &reset->cfg_rst, 7, 7, size);
sja1105_pack(buf, &reset->car_rst, 5, 5, size);
sja1105_pack(buf, &reset->otp_rst, 4, 4, size);
sja1105_pack(buf, &reset->warm_rst, 3, 3, size);
sja1105_pack(buf, &reset->cold_rst, 2, 2, size);
sja1105_pack(buf, &reset->por_rst, 1, 1, size);
}
static int sja1105et_reset_cmd(const void *ctx, const void *data)
{
const struct sja1105_private *priv = ctx;
const struct sja1105_reset_cmd *reset = data;
const struct sja1105_regs *regs = priv->info->regs;
struct device *dev = priv->ds->dev;
u8 packed_buf[SJA1105_SIZE_RESET_CMD];
if (reset->switch_rst ||
reset->cfg_rst ||
reset->car_rst ||
reset->otp_rst ||
reset->por_rst) {
dev_err(dev, "Only warm and cold reset are supported "
"for SJA1105 E/T!\n");
return -EINVAL;
}
if (reset->warm_rst)
dev_dbg(dev, "Warm reset requested\n");
if (reset->cold_rst)
dev_dbg(dev, "Cold reset requested\n");
sja1105et_reset_cmd_pack(packed_buf, reset);
return sja1105_spi_send_packed_buf(priv, SPI_WRITE, regs->rgu,
packed_buf, SJA1105_SIZE_RESET_CMD);
}
static int sja1105pqrs_reset_cmd(const void *ctx, const void *data)
{
const struct sja1105_private *priv = ctx;
const struct sja1105_reset_cmd *reset = data;
const struct sja1105_regs *regs = priv->info->regs;
struct device *dev = priv->ds->dev;
u8 packed_buf[SJA1105_SIZE_RESET_CMD];
if (reset->switch_rst)
dev_dbg(dev, "Main reset for all functional modules requested\n");
if (reset->cfg_rst)
dev_dbg(dev, "Chip configuration reset requested\n");
if (reset->car_rst)
dev_dbg(dev, "Clock and reset control logic reset requested\n");
if (reset->otp_rst)
dev_dbg(dev, "OTP read cycle for reading product "
"config settings requested\n");
if (reset->warm_rst)
dev_dbg(dev, "Warm reset requested\n");
if (reset->cold_rst)
dev_dbg(dev, "Cold reset requested\n");
if (reset->por_rst)
dev_dbg(dev, "Power-on reset requested\n");
sja1105pqrs_reset_cmd_pack(packed_buf, reset);
return sja1105_spi_send_packed_buf(priv, SPI_WRITE, regs->rgu,
packed_buf, SJA1105_SIZE_RESET_CMD);
}
static int sja1105_cold_reset(const struct sja1105_private *priv)
{
struct sja1105_reset_cmd reset = {0};
reset.cold_rst = 1;
return priv->info->reset_cmd(priv, &reset);
}
static int sja1105_inhibit_tx(const struct sja1105_private *priv,
const unsigned long *port_bitmap)
{
const struct sja1105_regs *regs = priv->info->regs;
u64 inhibit_cmd;
int port, rc;
rc = sja1105_spi_send_int(priv, SPI_READ, regs->port_control,
&inhibit_cmd, SJA1105_SIZE_PORT_CTRL);
if (rc < 0)
return rc;
for_each_set_bit(port, port_bitmap, SJA1105_NUM_PORTS)
inhibit_cmd |= BIT(port);
return sja1105_spi_send_int(priv, SPI_WRITE, regs->port_control,
&inhibit_cmd, SJA1105_SIZE_PORT_CTRL);
}
struct sja1105_status {
u64 configs;
u64 crcchkl;
u64 ids;
u64 crcchkg;
};
/* This is not reading the entire General Status area, which is also
* divergent between E/T and P/Q/R/S, but only the relevant bits for
* ensuring that the static config upload procedure was successful.
*/
static void sja1105_status_unpack(void *buf, struct sja1105_status *status)
{
/* So that pointer arithmetic advances in 4-byte register steps */
u32 *p = buf;
/* device_id is missing from the buffer, but we don't
* want to diverge from the manual definition of the
* register addresses, so we'll back off one step with
* the register pointer, and never access p[0].
*/
p--;
sja1105_unpack(p + 0x1, &status->configs, 31, 31, 4);
sja1105_unpack(p + 0x1, &status->crcchkl, 30, 30, 4);
sja1105_unpack(p + 0x1, &status->ids, 29, 29, 4);
sja1105_unpack(p + 0x1, &status->crcchkg, 28, 28, 4);
}
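The pointer back-off above can be shown in isolation: by stepping the u32 pointer back one element, p[n] addresses the buffer using the manual's register offsets even though offset 0 (device_id) is absent. A minimal sketch (like the driver, it never dereferences p[0]):

```c
#include <stdint.h>

/* buf[] holds registers read starting at manual offset 1; device_id
 * (manual offset 0) was not read. Backing off one element lets the
 * manual's offsets be used as indices directly. p[0] is never read.
 */
static uint32_t reg_at_manual_offset(const uint32_t *buf, int offset)
{
	const uint32_t *p = buf - 1;

	return p[offset];
}
```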
static int sja1105_status_get(struct sja1105_private *priv,
struct sja1105_status *status)
{
const struct sja1105_regs *regs = priv->info->regs;
u8 packed_buf[4];
int rc;
rc = sja1105_spi_send_packed_buf(priv, SPI_READ,
regs->status,
packed_buf, 4);
if (rc < 0)
return rc;
sja1105_status_unpack(packed_buf, status);
return 0;
}
/* Not const because packing priv->static_config into buffers and preparing
* for upload requires the recalculation of table CRCs and updating the
* structures with these.
*/
static int
static_config_buf_prepare_for_upload(struct sja1105_private *priv,
void *config_buf, int buf_len)
{
struct sja1105_static_config *config = &priv->static_config;
struct sja1105_table_header final_header;
sja1105_config_valid_t valid;
char *final_header_ptr;
int crc_len;
valid = sja1105_static_config_check_valid(config);
if (valid != SJA1105_CONFIG_OK) {
dev_err(&priv->spidev->dev,
sja1105_static_config_error_msg[valid]);
return -EINVAL;
}
/* Write Device ID and config tables to config_buf */
sja1105_static_config_pack(config_buf, config);
/* Recalculate CRC of the last header (right now 0xDEADBEEF).
* Don't include the CRC field itself.
*/
crc_len = buf_len - 4;
/* Read the whole table header */
final_header_ptr = config_buf + buf_len - SJA1105_SIZE_TABLE_HEADER;
sja1105_table_header_packing(final_header_ptr, &final_header, UNPACK);
/* Modify */
final_header.crc = sja1105_crc32(config_buf, crc_len);
/* Rewrite */
sja1105_table_header_packing(final_header_ptr, &final_header, PACK);
return 0;
}
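The finalize step above (compute a checksum over everything except the trailing 4-byte CRC field, then rewrite that field in place) can be sketched with a toy byte-sum standing in for sja1105_crc32 and the packed header layout:

```c
#include <stdint.h>
#include <stddef.h>

/* Toy stand-in for CRC32: the sum of all covered bytes */
static uint32_t toy_crc(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0;
	size_t i;

	for (i = 0; i < len; i++)
		crc += buf[i];
	return crc;
}

/* Recompute the checksum over the buffer minus its last 4 bytes and
 * rewrite those 4 bytes (little-endian, arbitrarily chosen for the
 * toy) with the result, mirroring the "recalculate CRC of the last
 * header" step.
 */
static void finalize_buf(uint8_t *buf, size_t buf_len)
{
	uint32_t crc = toy_crc(buf, buf_len - 4);
	size_t i;

	for (i = 0; i < 4; i++)
		buf[buf_len - 4 + i] = (crc >> (8 * i)) & 0xff;
}
```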
#define RETRIES 10
int sja1105_static_config_upload(struct sja1105_private *priv)
{
unsigned long port_bitmap = GENMASK_ULL(SJA1105_NUM_PORTS - 1, 0);
struct sja1105_static_config *config = &priv->static_config;
const struct sja1105_regs *regs = priv->info->regs;
struct device *dev = &priv->spidev->dev;
struct sja1105_status status = {0};
int rc, retries = RETRIES;
u8 *config_buf;
int buf_len;
buf_len = sja1105_static_config_get_length(config);
config_buf = kcalloc(buf_len, sizeof(char), GFP_KERNEL);
if (!config_buf)
return -ENOMEM;
rc = static_config_buf_prepare_for_upload(priv, config_buf, buf_len);
if (rc < 0) {
dev_err(dev, "Invalid config, cannot upload\n");
rc = -EINVAL;
goto out;
}
/* Prevent PHY jabbering during switch reset by inhibiting
 * Tx on all ports and waiting for current packet to drain.
 * Otherwise, the PHY will see an unterminated Ethernet packet.
 */
rc = sja1105_inhibit_tx(priv, &port_bitmap);
if (rc < 0) {
dev_err(dev, "Failed to inhibit Tx on ports\n");
rc = -ENXIO;
goto out;
}
/* Wait for an eventual egress packet to finish transmission
* (reach IFG). It is guaranteed that a second one will not
* follow, and that switch cold reset is thus safe
*/
usleep_range(500, 1000);
do {
/* Put the SJA1105 in programming mode */
rc = sja1105_cold_reset(priv);
if (rc < 0) {
dev_err(dev, "Failed to reset switch, retrying...\n");
continue;
}
/* Wait for the switch to come out of reset */
usleep_range(1000, 5000);
/* Upload the static config to the device */
rc = sja1105_spi_send_long_packed_buf(priv, SPI_WRITE,
regs->config,
config_buf, buf_len);
if (rc < 0) {
dev_err(dev, "Failed to upload config, retrying...\n");
continue;
}
/* Check that SJA1105 responded well to the config upload */
rc = sja1105_status_get(priv, &status);
if (rc < 0)
continue;
if (status.ids == 1) {
dev_err(dev, "Mismatch between hardware and static config "
"device id. Wrote 0x%llx, wants 0x%llx\n",
config->device_id, priv->info->device_id);
continue;
}
if (status.crcchkl == 1) {
dev_err(dev, "Switch reported invalid local CRC on "
"the uploaded config, retrying...\n");
continue;
}
if (status.crcchkg == 1) {
dev_err(dev, "Switch reported invalid global CRC on "
"the uploaded config, retrying...\n");
continue;
}
if (status.configs == 0) {
dev_err(dev, "Switch reported that configuration is "
"invalid, retrying...\n");
continue;
}
} while (--retries && (status.crcchkl == 1 || status.crcchkg == 1 ||
status.configs == 0 || status.ids == 1));
if (!retries) {
rc = -EIO;
dev_err(dev, "Failed to upload config to device, giving up\n");
goto out;
} else if (retries != RETRIES - 1) {
dev_info(dev, "Succeeded after %d tries\n", RETRIES - retries);
}
dev_info(dev, "Reset switch and programmed static config\n");
out:
kfree(config_buf);
return rc;
}
struct sja1105_regs sja1105et_regs = {
.device_id = 0x0,
.prod_id = 0x100BC3,
.status = 0x1,
.port_control = 0x11,
.config = 0x020000,
.rgu = 0x100440,
.pad_mii_tx = {0x100800, 0x100802, 0x100804, 0x100806, 0x100808},
.rmii_pll1 = 0x10000A,
.cgu_idiv = {0x10000B, 0x10000C, 0x10000D, 0x10000E, 0x10000F},
/* UM10944.pdf, Table 86, ACU Register overview */
.rgmii_pad_mii_tx = {0x100800, 0x100802, 0x100804, 0x100806, 0x100808},
.mac = {0x200, 0x202, 0x204, 0x206, 0x208},
.mac_hl1 = {0x400, 0x410, 0x420, 0x430, 0x440},
.mac_hl2 = {0x600, 0x610, 0x620, 0x630, 0x640},
/* UM10944.pdf, Table 78, CGU Register overview */
.mii_tx_clk = {0x100013, 0x10001A, 0x100021, 0x100028, 0x10002F},
.mii_rx_clk = {0x100014, 0x10001B, 0x100022, 0x100029, 0x100030},
.mii_ext_tx_clk = {0x100018, 0x10001F, 0x100026, 0x10002D, 0x100034},
.mii_ext_rx_clk = {0x100019, 0x100020, 0x100027, 0x10002E, 0x100035},
.rgmii_tx_clk = {0x100016, 0x10001D, 0x100024, 0x10002B, 0x100032},
.rmii_ref_clk = {0x100015, 0x10001C, 0x100023, 0x10002A, 0x100031},
.rmii_ext_tx_clk = {0x100018, 0x10001F, 0x100026, 0x10002D, 0x100034},
};
struct sja1105_regs sja1105pqrs_regs = {
.device_id = 0x0,
.prod_id = 0x100BC3,
.status = 0x1,
.port_control = 0x12,
.config = 0x020000,
.rgu = 0x100440,
.pad_mii_tx = {0x100800, 0x100802, 0x100804, 0x100806, 0x100808},
.rmii_pll1 = 0x10000A,
.cgu_idiv = {0x10000B, 0x10000C, 0x10000D, 0x10000E, 0x10000F},
/* UM10944.pdf, Table 86, ACU Register overview */
.rgmii_pad_mii_tx = {0x100800, 0x100802, 0x100804, 0x100806, 0x100808},
.mac = {0x200, 0x202, 0x204, 0x206, 0x208},
.mac_hl1 = {0x400, 0x410, 0x420, 0x430, 0x440},
.mac_hl2 = {0x600, 0x610, 0x620, 0x630, 0x640},
/* UM11040.pdf, Table 114 */
.mii_tx_clk = {0x100013, 0x100019, 0x10001F, 0x100025, 0x10002B},
.mii_rx_clk = {0x100014, 0x10001A, 0x100020, 0x100026, 0x10002C},
.mii_ext_tx_clk = {0x100017, 0x10001D, 0x100023, 0x100029, 0x10002F},
.mii_ext_rx_clk = {0x100018, 0x10001E, 0x100024, 0x10002A, 0x100030},
.rgmii_tx_clk = {0x100016, 0x10001C, 0x100022, 0x100028, 0x10002E},
.rmii_ref_clk = {0x100015, 0x10001B, 0x100021, 0x100027, 0x10002D},
.rmii_ext_tx_clk = {0x100017, 0x10001D, 0x100023, 0x100029, 0x10002F},
.qlevel = {0x604, 0x614, 0x624, 0x634, 0x644},
};
struct sja1105_info sja1105e_info = {
.device_id = SJA1105E_DEVICE_ID,
.part_no = SJA1105ET_PART_NO,
.static_ops = sja1105e_table_ops,
.dyn_ops = sja1105et_dyn_ops,
.reset_cmd = sja1105et_reset_cmd,
.regs = &sja1105et_regs,
.name = "SJA1105E",
};
struct sja1105_info sja1105t_info = {
.device_id = SJA1105T_DEVICE_ID,
.part_no = SJA1105ET_PART_NO,
.static_ops = sja1105t_table_ops,
.dyn_ops = sja1105et_dyn_ops,
.reset_cmd = sja1105et_reset_cmd,
.regs = &sja1105et_regs,
.name = "SJA1105T",
};
struct sja1105_info sja1105p_info = {
.device_id = SJA1105PR_DEVICE_ID,
.part_no = SJA1105P_PART_NO,
.static_ops = sja1105p_table_ops,
.dyn_ops = sja1105pqrs_dyn_ops,
.reset_cmd = sja1105pqrs_reset_cmd,
.regs = &sja1105pqrs_regs,
.name = "SJA1105P",
};
struct sja1105_info sja1105q_info = {
.device_id = SJA1105QS_DEVICE_ID,
.part_no = SJA1105Q_PART_NO,
.static_ops = sja1105q_table_ops,
.dyn_ops = sja1105pqrs_dyn_ops,
.reset_cmd = sja1105pqrs_reset_cmd,
.regs = &sja1105pqrs_regs,
.name = "SJA1105Q",
};
struct sja1105_info sja1105r_info = {
.device_id = SJA1105PR_DEVICE_ID,
.part_no = SJA1105R_PART_NO,
.static_ops = sja1105r_table_ops,
.dyn_ops = sja1105pqrs_dyn_ops,
.reset_cmd = sja1105pqrs_reset_cmd,
.regs = &sja1105pqrs_regs,
.name = "SJA1105R",
};
struct sja1105_info sja1105s_info = {
.device_id = SJA1105QS_DEVICE_ID,
.part_no = SJA1105S_PART_NO,
.static_ops = sja1105s_table_ops,
.dyn_ops = sja1105pqrs_dyn_ops,
.regs = &sja1105pqrs_regs,
.reset_cmd = sja1105pqrs_reset_cmd,
.name = "SJA1105S",
};
// SPDX-License-Identifier: BSD-3-Clause
/* Copyright (c) 2016-2018, NXP Semiconductors
* Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
*/
#include "sja1105_static_config.h"
#include <linux/crc32.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/errno.h>
/* Convenience wrappers over the generic packing functions. These take into
 * account the SJA1105 memory layout quirks and provide some level of
 * programmer protection against incorrect API use. The errors are not
 * expected to occur at runtime, so printing and swallowing them here is
 * appropriate instead of cluttering up higher-level code.
 */
void sja1105_pack(void *buf, const u64 *val, int start, int end, size_t len)
{
int rc = packing(buf, (u64 *)val, start, end, len,
PACK, QUIRK_LSW32_IS_FIRST);
if (likely(!rc))
return;
if (rc == -EINVAL) {
pr_err("Start bit (%d) expected to be larger than end (%d)\n",
start, end);
} else if (rc == -ERANGE) {
if ((start - end + 1) > 64)
pr_err("Field %d-%d too large for 64 bits!\n",
start, end);
else
pr_err("Cannot store %llx inside bits %d-%d (would truncate)\n",
*val, start, end);
}
dump_stack();
}
void sja1105_unpack(const void *buf, u64 *val, int start, int end, size_t len)
{
int rc = packing((void *)buf, val, start, end, len,
UNPACK, QUIRK_LSW32_IS_FIRST);
if (likely(!rc))
return;
if (rc == -EINVAL)
pr_err("Start bit (%d) expected to be larger than end (%d)\n",
start, end);
else if (rc == -ERANGE)
pr_err("Field %d-%d too large for 64 bits!\n",
start, end);
dump_stack();
}
void sja1105_packing(void *buf, u64 *val, int start, int end,
size_t len, enum packing_op op)
{
int rc = packing(buf, val, start, end, len, op, QUIRK_LSW32_IS_FIRST);
if (likely(!rc))
return;
if (rc == -EINVAL) {
pr_err("Start bit (%d) expected to be larger than end (%d)\n",
start, end);
} else if (rc == -ERANGE) {
if ((start - end + 1) > 64)
pr_err("Field %d-%d too large for 64 bits!\n",
start, end);
else
pr_err("Cannot store %llx inside bits %d-%d (would truncate)\n",
*val, start, end);
}
dump_stack();
}
/* Little-endian Ethernet CRC32 of data packed as big-endian u32 words */
u32 sja1105_crc32(const void *buf, size_t len)
{
unsigned int i;
u64 word;
u32 crc;
/* seed */
crc = ~0;
for (i = 0; i < len; i += 4) {
sja1105_unpack((void *)buf + i, &word, 31, 0, 4);
crc = crc32_le(crc, (u8 *)&word, 4);
}
return ~crc;
}
static size_t sja1105et_general_params_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
const size_t size = SJA1105ET_SIZE_GENERAL_PARAMS_ENTRY;
struct sja1105_general_params_entry *entry = entry_ptr;
sja1105_packing(buf, &entry->vllupformat, 319, 319, size, op);
sja1105_packing(buf, &entry->mirr_ptacu, 318, 318, size, op);
sja1105_packing(buf, &entry->switchid, 317, 315, size, op);
sja1105_packing(buf, &entry->hostprio, 314, 312, size, op);
sja1105_packing(buf, &entry->mac_fltres1, 311, 264, size, op);
sja1105_packing(buf, &entry->mac_fltres0, 263, 216, size, op);
sja1105_packing(buf, &entry->mac_flt1, 215, 168, size, op);
sja1105_packing(buf, &entry->mac_flt0, 167, 120, size, op);
sja1105_packing(buf, &entry->incl_srcpt1, 119, 119, size, op);
sja1105_packing(buf, &entry->incl_srcpt0, 118, 118, size, op);
sja1105_packing(buf, &entry->send_meta1, 117, 117, size, op);
sja1105_packing(buf, &entry->send_meta0, 116, 116, size, op);
sja1105_packing(buf, &entry->casc_port, 115, 113, size, op);
sja1105_packing(buf, &entry->host_port, 112, 110, size, op);
sja1105_packing(buf, &entry->mirr_port, 109, 107, size, op);
sja1105_packing(buf, &entry->vlmarker, 106, 75, size, op);
sja1105_packing(buf, &entry->vlmask, 74, 43, size, op);
sja1105_packing(buf, &entry->tpid, 42, 27, size, op);
sja1105_packing(buf, &entry->ignore2stf, 26, 26, size, op);
sja1105_packing(buf, &entry->tpid2, 25, 10, size, op);
return size;
}
static size_t
sja1105pqrs_general_params_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
const size_t size = SJA1105PQRS_SIZE_GENERAL_PARAMS_ENTRY;
struct sja1105_general_params_entry *entry = entry_ptr;
sja1105_packing(buf, &entry->vllupformat, 351, 351, size, op);
sja1105_packing(buf, &entry->mirr_ptacu, 350, 350, size, op);
sja1105_packing(buf, &entry->switchid, 349, 347, size, op);
sja1105_packing(buf, &entry->hostprio, 346, 344, size, op);
sja1105_packing(buf, &entry->mac_fltres1, 343, 296, size, op);
sja1105_packing(buf, &entry->mac_fltres0, 295, 248, size, op);
sja1105_packing(buf, &entry->mac_flt1, 247, 200, size, op);
sja1105_packing(buf, &entry->mac_flt0, 199, 152, size, op);
sja1105_packing(buf, &entry->incl_srcpt1, 151, 151, size, op);
sja1105_packing(buf, &entry->incl_srcpt0, 150, 150, size, op);
sja1105_packing(buf, &entry->send_meta1, 149, 149, size, op);
sja1105_packing(buf, &entry->send_meta0, 148, 148, size, op);
sja1105_packing(buf, &entry->casc_port, 147, 145, size, op);
sja1105_packing(buf, &entry->host_port, 144, 142, size, op);
sja1105_packing(buf, &entry->mirr_port, 141, 139, size, op);
sja1105_packing(buf, &entry->vlmarker, 138, 107, size, op);
sja1105_packing(buf, &entry->vlmask, 106, 75, size, op);
sja1105_packing(buf, &entry->tpid, 74, 59, size, op);
sja1105_packing(buf, &entry->ignore2stf, 58, 58, size, op);
sja1105_packing(buf, &entry->tpid2, 57, 42, size, op);
sja1105_packing(buf, &entry->queue_ts, 41, 41, size, op);
sja1105_packing(buf, &entry->egrmirrvid, 40, 29, size, op);
sja1105_packing(buf, &entry->egrmirrpcp, 28, 26, size, op);
sja1105_packing(buf, &entry->egrmirrdei, 25, 25, size, op);
sja1105_packing(buf, &entry->replay_port, 24, 22, size, op);
return size;
}
static size_t
sja1105_l2_forwarding_params_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
const size_t size = SJA1105_SIZE_L2_FORWARDING_PARAMS_ENTRY;
struct sja1105_l2_forwarding_params_entry *entry = entry_ptr;
int offset, i;
sja1105_packing(buf, &entry->max_dynp, 95, 93, size, op);
for (i = 0, offset = 13; i < 8; i++, offset += 10)
sja1105_packing(buf, &entry->part_spc[i],
offset + 9, offset + 0, size, op);
return size;
}
size_t sja1105_l2_forwarding_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
const size_t size = SJA1105_SIZE_L2_FORWARDING_ENTRY;
struct sja1105_l2_forwarding_entry *entry = entry_ptr;
int offset, i;
sja1105_packing(buf, &entry->bc_domain, 63, 59, size, op);
sja1105_packing(buf, &entry->reach_port, 58, 54, size, op);
sja1105_packing(buf, &entry->fl_domain, 53, 49, size, op);
for (i = 0, offset = 25; i < 8; i++, offset += 3)
sja1105_packing(buf, &entry->vlan_pmap[i],
offset + 2, offset + 0, size, op);
return size;
}
static size_t
sja1105et_l2_lookup_params_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
const size_t size = SJA1105ET_SIZE_L2_LOOKUP_PARAMS_ENTRY;
struct sja1105_l2_lookup_params_entry *entry = entry_ptr;
sja1105_packing(buf, &entry->maxage, 31, 17, size, op);
sja1105_packing(buf, &entry->dyn_tbsz, 16, 14, size, op);
sja1105_packing(buf, &entry->poly, 13, 6, size, op);
sja1105_packing(buf, &entry->shared_learn, 5, 5, size, op);
sja1105_packing(buf, &entry->no_enf_hostprt, 4, 4, size, op);
sja1105_packing(buf, &entry->no_mgmt_learn, 3, 3, size, op);
return size;
}
static size_t
sja1105pqrs_l2_lookup_params_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
const size_t size = SJA1105PQRS_SIZE_L2_LOOKUP_PARAMS_ENTRY;
struct sja1105_l2_lookup_params_entry *entry = entry_ptr;
sja1105_packing(buf, &entry->maxage, 57, 43, size, op);
sja1105_packing(buf, &entry->shared_learn, 27, 27, size, op);
sja1105_packing(buf, &entry->no_enf_hostprt, 26, 26, size, op);
sja1105_packing(buf, &entry->no_mgmt_learn, 25, 25, size, op);
return size;
}
size_t sja1105et_l2_lookup_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
const size_t size = SJA1105ET_SIZE_L2_LOOKUP_ENTRY;
struct sja1105_l2_lookup_entry *entry = entry_ptr;
sja1105_packing(buf, &entry->vlanid, 95, 84, size, op);
sja1105_packing(buf, &entry->macaddr, 83, 36, size, op);
sja1105_packing(buf, &entry->destports, 35, 31, size, op);
sja1105_packing(buf, &entry->enfport, 30, 30, size, op);
sja1105_packing(buf, &entry->index, 29, 20, size, op);
return size;
}
size_t sja1105pqrs_l2_lookup_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
const size_t size = SJA1105PQRS_SIZE_L2_LOOKUP_ENTRY;
struct sja1105_l2_lookup_entry *entry = entry_ptr;
/* These are static L2 lookup entries, so the structure
* should match UM11040 Table 16/17 definitions when
* LOCKEDS is 1.
*/
sja1105_packing(buf, &entry->vlanid, 81, 70, size, op);
sja1105_packing(buf, &entry->macaddr, 69, 22, size, op);
sja1105_packing(buf, &entry->destports, 21, 17, size, op);
sja1105_packing(buf, &entry->enfport, 16, 16, size, op);
sja1105_packing(buf, &entry->index, 15, 6, size, op);
return size;
}
static size_t sja1105_l2_policing_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
const size_t size = SJA1105_SIZE_L2_POLICING_ENTRY;
struct sja1105_l2_policing_entry *entry = entry_ptr;
sja1105_packing(buf, &entry->sharindx, 63, 58, size, op);
sja1105_packing(buf, &entry->smax, 57, 42, size, op);
sja1105_packing(buf, &entry->rate, 41, 26, size, op);
sja1105_packing(buf, &entry->maxlen, 25, 15, size, op);
sja1105_packing(buf, &entry->partition, 14, 12, size, op);
return size;
}
static size_t sja1105et_mac_config_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
const size_t size = SJA1105ET_SIZE_MAC_CONFIG_ENTRY;
struct sja1105_mac_config_entry *entry = entry_ptr;
int offset, i;
for (i = 0, offset = 72; i < 8; i++, offset += 19) {
sja1105_packing(buf, &entry->enabled[i],
offset + 0, offset + 0, size, op);
sja1105_packing(buf, &entry->base[i],
offset + 9, offset + 1, size, op);
sja1105_packing(buf, &entry->top[i],
offset + 18, offset + 10, size, op);
}
sja1105_packing(buf, &entry->ifg, 71, 67, size, op);
sja1105_packing(buf, &entry->speed, 66, 65, size, op);
sja1105_packing(buf, &entry->tp_delin, 64, 49, size, op);
sja1105_packing(buf, &entry->tp_delout, 48, 33, size, op);
sja1105_packing(buf, &entry->maxage, 32, 25, size, op);
sja1105_packing(buf, &entry->vlanprio, 24, 22, size, op);
sja1105_packing(buf, &entry->vlanid, 21, 10, size, op);
sja1105_packing(buf, &entry->ing_mirr, 9, 9, size, op);
sja1105_packing(buf, &entry->egr_mirr, 8, 8, size, op);
sja1105_packing(buf, &entry->drpnona664, 7, 7, size, op);
sja1105_packing(buf, &entry->drpdtag, 6, 6, size, op);
sja1105_packing(buf, &entry->drpuntag, 5, 5, size, op);
sja1105_packing(buf, &entry->retag, 4, 4, size, op);
sja1105_packing(buf, &entry->dyn_learn, 3, 3, size, op);
sja1105_packing(buf, &entry->egress, 2, 2, size, op);
sja1105_packing(buf, &entry->ingress, 1, 1, size, op);
return size;
}
size_t sja1105pqrs_mac_config_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
const size_t size = SJA1105PQRS_SIZE_MAC_CONFIG_ENTRY;
struct sja1105_mac_config_entry *entry = entry_ptr;
int offset, i;
for (i = 0, offset = 104; i < 8; i++, offset += 19) {
sja1105_packing(buf, &entry->enabled[i],
offset + 0, offset + 0, size, op);
sja1105_packing(buf, &entry->base[i],
offset + 9, offset + 1, size, op);
sja1105_packing(buf, &entry->top[i],
offset + 18, offset + 10, size, op);
}
sja1105_packing(buf, &entry->ifg, 103, 99, size, op);
sja1105_packing(buf, &entry->speed, 98, 97, size, op);
sja1105_packing(buf, &entry->tp_delin, 96, 81, size, op);
sja1105_packing(buf, &entry->tp_delout, 80, 65, size, op);
sja1105_packing(buf, &entry->maxage, 64, 57, size, op);
sja1105_packing(buf, &entry->vlanprio, 56, 54, size, op);
sja1105_packing(buf, &entry->vlanid, 53, 42, size, op);
sja1105_packing(buf, &entry->ing_mirr, 41, 41, size, op);
sja1105_packing(buf, &entry->egr_mirr, 40, 40, size, op);
sja1105_packing(buf, &entry->drpnona664, 39, 39, size, op);
sja1105_packing(buf, &entry->drpdtag, 38, 38, size, op);
sja1105_packing(buf, &entry->drpuntag, 35, 35, size, op);
sja1105_packing(buf, &entry->retag, 34, 34, size, op);
sja1105_packing(buf, &entry->dyn_learn, 33, 33, size, op);
sja1105_packing(buf, &entry->egress, 32, 32, size, op);
sja1105_packing(buf, &entry->ingress, 31, 31, size, op);
return size;
}
size_t sja1105_vlan_lookup_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
const size_t size = SJA1105_SIZE_VLAN_LOOKUP_ENTRY;
struct sja1105_vlan_lookup_entry *entry = entry_ptr;
sja1105_packing(buf, &entry->ving_mirr, 63, 59, size, op);
sja1105_packing(buf, &entry->vegr_mirr, 58, 54, size, op);
sja1105_packing(buf, &entry->vmemb_port, 53, 49, size, op);
sja1105_packing(buf, &entry->vlan_bc, 48, 44, size, op);
sja1105_packing(buf, &entry->tag_port, 43, 39, size, op);
sja1105_packing(buf, &entry->vlanid, 38, 27, size, op);
return size;
}
static size_t sja1105_xmii_params_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
const size_t size = SJA1105_SIZE_XMII_PARAMS_ENTRY;
struct sja1105_xmii_params_entry *entry = entry_ptr;
int offset, i;
for (i = 0, offset = 17; i < 5; i++, offset += 3) {
sja1105_packing(buf, &entry->xmii_mode[i],
offset + 1, offset + 0, size, op);
sja1105_packing(buf, &entry->phy_mac[i],
offset + 2, offset + 2, size, op);
}
return size;
}
size_t sja1105_table_header_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
const size_t size = SJA1105_SIZE_TABLE_HEADER;
struct sja1105_table_header *entry = entry_ptr;
sja1105_packing(buf, &entry->block_id, 31, 24, size, op);
sja1105_packing(buf, &entry->len, 55, 32, size, op);
sja1105_packing(buf, &entry->crc, 95, 64, size, op);
return size;
}
/* WARNING: the *hdr pointer is really non-const, because the function
 * updates hdr->crc as part of the 2-stage packing operation
 */
void
sja1105_table_header_pack_with_crc(void *buf, struct sja1105_table_header *hdr)
{
/* First pack the header as-is, then calculate the CRC, and
 * finally put the proper CRC into the packed buffer
 */
memset(buf, 0, SJA1105_SIZE_TABLE_HEADER);
sja1105_table_header_packing(buf, hdr, PACK);
hdr->crc = sja1105_crc32(buf, SJA1105_SIZE_TABLE_HEADER - 4);
sja1105_pack(buf + SJA1105_SIZE_TABLE_HEADER - 4, &hdr->crc, 31, 0, 4);
}
static void sja1105_table_write_crc(u8 *table_start, u8 *crc_ptr)
{
u64 computed_crc;
int len_bytes;
len_bytes = (uintptr_t)(crc_ptr - table_start);
computed_crc = sja1105_crc32(table_start, len_bytes);
sja1105_pack(crc_ptr, &computed_crc, 31, 0, 4);
}
/* The block IDs that the switches support are unfortunately sparse, so keep a
 * mapping table to "block indices" and translate back and forth so that we
 * don't waste memory in struct sja1105_static_config.
 * Also, since the block ID comes from essentially untrusted input (unpacking
 * the static config from userspace), it has to be sanitized (range-checked)
 * before blindly indexing kernel memory with the blk_idx.
 */
static u64 blk_id_map[BLK_IDX_MAX] = {
[BLK_IDX_L2_LOOKUP] = BLKID_L2_LOOKUP,
[BLK_IDX_L2_POLICING] = BLKID_L2_POLICING,
[BLK_IDX_VLAN_LOOKUP] = BLKID_VLAN_LOOKUP,
[BLK_IDX_L2_FORWARDING] = BLKID_L2_FORWARDING,
[BLK_IDX_MAC_CONFIG] = BLKID_MAC_CONFIG,
[BLK_IDX_L2_LOOKUP_PARAMS] = BLKID_L2_LOOKUP_PARAMS,
[BLK_IDX_L2_FORWARDING_PARAMS] = BLKID_L2_FORWARDING_PARAMS,
[BLK_IDX_GENERAL_PARAMS] = BLKID_GENERAL_PARAMS,
[BLK_IDX_XMII_PARAMS] = BLKID_XMII_PARAMS,
};
const char *sja1105_static_config_error_msg[] = {
[SJA1105_CONFIG_OK] = "",
[SJA1105_MISSING_L2_POLICING_TABLE] =
"l2-policing-table needs to have at least one entry",
[SJA1105_MISSING_L2_FORWARDING_TABLE] =
"l2-forwarding-table is either missing or incomplete",
[SJA1105_MISSING_L2_FORWARDING_PARAMS_TABLE] =
"l2-forwarding-parameters-table is missing",
[SJA1105_MISSING_GENERAL_PARAMS_TABLE] =
"general-parameters-table is missing",
[SJA1105_MISSING_VLAN_TABLE] =
"vlan-lookup-table needs to have at least the default untagged VLAN",
[SJA1105_MISSING_XMII_TABLE] =
"xmii-table is missing",
[SJA1105_MISSING_MAC_TABLE] =
"mac-configuration-table needs to contain an entry for each port",
[SJA1105_OVERCOMMITTED_FRAME_MEMORY] =
"Not allowed to overcommit frame memory. L2 memory partitions "
"and VL memory partitions share the same space. The sum of all "
"16 memory partitions is not allowed to be larger than 929 "
"128-byte blocks (or 910 with retagging). Please adjust "
"l2-forwarding-parameters-table.part_spc and/or "
"vl-forwarding-parameters-table.partspc.",
};
sja1105_config_valid_t
static_config_check_memory_size(const struct sja1105_table *tables)
{
const struct sja1105_l2_forwarding_params_entry *l2_fwd_params;
int i, mem = 0;
l2_fwd_params = tables[BLK_IDX_L2_FORWARDING_PARAMS].entries;
for (i = 0; i < 8; i++)
mem += l2_fwd_params->part_spc[i];
if (mem > SJA1105_MAX_FRAME_MEMORY)
return SJA1105_OVERCOMMITTED_FRAME_MEMORY;
return SJA1105_CONFIG_OK;
}
sja1105_config_valid_t
sja1105_static_config_check_valid(const struct sja1105_static_config *config)
{
const struct sja1105_table *tables = config->tables;
#define IS_FULL(blk_idx) \
(tables[blk_idx].entry_count == tables[blk_idx].ops->max_entry_count)
if (tables[BLK_IDX_L2_POLICING].entry_count == 0)
return SJA1105_MISSING_L2_POLICING_TABLE;
if (tables[BLK_IDX_VLAN_LOOKUP].entry_count == 0)
return SJA1105_MISSING_VLAN_TABLE;
if (!IS_FULL(BLK_IDX_L2_FORWARDING))
return SJA1105_MISSING_L2_FORWARDING_TABLE;
if (!IS_FULL(BLK_IDX_MAC_CONFIG))
return SJA1105_MISSING_MAC_TABLE;
if (!IS_FULL(BLK_IDX_L2_FORWARDING_PARAMS))
return SJA1105_MISSING_L2_FORWARDING_PARAMS_TABLE;
if (!IS_FULL(BLK_IDX_GENERAL_PARAMS))
return SJA1105_MISSING_GENERAL_PARAMS_TABLE;
if (!IS_FULL(BLK_IDX_XMII_PARAMS))
return SJA1105_MISSING_XMII_TABLE;
return static_config_check_memory_size(tables);
#undef IS_FULL
}
void
sja1105_static_config_pack(void *buf, struct sja1105_static_config *config)
{
struct sja1105_table_header header = {0};
enum sja1105_blk_idx i;
char *p = buf;
int j;
sja1105_pack(p, &config->device_id, 31, 0, 4);
p += SJA1105_SIZE_DEVICE_ID;
for (i = 0; i < BLK_IDX_MAX; i++) {
const struct sja1105_table *table;
char *table_start;
table = &config->tables[i];
if (!table->entry_count)
continue;
header.block_id = blk_id_map[i];
header.len = table->entry_count *
table->ops->packed_entry_size / 4;
sja1105_table_header_pack_with_crc(p, &header);
p += SJA1105_SIZE_TABLE_HEADER;
table_start = p;
for (j = 0; j < table->entry_count; j++) {
u8 *entry_ptr = table->entries;
entry_ptr += j * table->ops->unpacked_entry_size;
memset(p, 0, table->ops->packed_entry_size);
table->ops->packing(p, entry_ptr, PACK);
p += table->ops->packed_entry_size;
}
sja1105_table_write_crc(table_start, p);
p += 4;
}
/* Final header:
 * - Block ID does not matter
 * - A length of 0 marks the header as final
 * - The CRC will be replaced on-the-fly during "config upload"
 */
header.block_id = 0;
header.len = 0;
header.crc = 0xDEADBEEF;
memset(p, 0, SJA1105_SIZE_TABLE_HEADER);
sja1105_table_header_packing(p, &header, PACK);
}
size_t
sja1105_static_config_get_length(const struct sja1105_static_config *config)
{
unsigned int sum;
unsigned int header_count;
enum sja1105_blk_idx i;
/* Ending header */
header_count = 1;
sum = SJA1105_SIZE_DEVICE_ID;
/* Tables (headers and entries) */
for (i = 0; i < BLK_IDX_MAX; i++) {
const struct sja1105_table *table;
table = &config->tables[i];
if (table->entry_count)
header_count++;
sum += table->ops->packed_entry_size * table->entry_count;
}
/* Headers have an additional CRC at the end */
sum += header_count * (SJA1105_SIZE_TABLE_HEADER + 4);
/* Last header does not have an extra CRC because there is no data */
sum -= 4;
return sum;
}
/* Compatibility matrices */
/* SJA1105E: First generation, no TTEthernet */
struct sja1105_table_ops sja1105e_table_ops[BLK_IDX_MAX] = {
[BLK_IDX_L2_LOOKUP] = {
.packing = sja1105et_l2_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry),
.packed_entry_size = SJA1105ET_SIZE_L2_LOOKUP_ENTRY,
.max_entry_count = SJA1105_MAX_L2_LOOKUP_COUNT,
},
[BLK_IDX_L2_POLICING] = {
.packing = sja1105_l2_policing_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_policing_entry),
.packed_entry_size = SJA1105_SIZE_L2_POLICING_ENTRY,
.max_entry_count = SJA1105_MAX_L2_POLICING_COUNT,
},
[BLK_IDX_VLAN_LOOKUP] = {
.packing = sja1105_vlan_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_vlan_lookup_entry),
.packed_entry_size = SJA1105_SIZE_VLAN_LOOKUP_ENTRY,
.max_entry_count = SJA1105_MAX_VLAN_LOOKUP_COUNT,
},
[BLK_IDX_L2_FORWARDING] = {
.packing = sja1105_l2_forwarding_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_entry),
.packed_entry_size = SJA1105_SIZE_L2_FORWARDING_ENTRY,
.max_entry_count = SJA1105_MAX_L2_FORWARDING_COUNT,
},
[BLK_IDX_MAC_CONFIG] = {
.packing = sja1105et_mac_config_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_mac_config_entry),
.packed_entry_size = SJA1105ET_SIZE_MAC_CONFIG_ENTRY,
.max_entry_count = SJA1105_MAX_MAC_CONFIG_COUNT,
},
[BLK_IDX_L2_LOOKUP_PARAMS] = {
.packing = sja1105et_l2_lookup_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry),
.packed_entry_size = SJA1105ET_SIZE_L2_LOOKUP_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_L2_LOOKUP_PARAMS_COUNT,
},
[BLK_IDX_L2_FORWARDING_PARAMS] = {
.packing = sja1105_l2_forwarding_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_params_entry),
.packed_entry_size = SJA1105_SIZE_L2_FORWARDING_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_L2_FORWARDING_PARAMS_COUNT,
},
[BLK_IDX_GENERAL_PARAMS] = {
.packing = sja1105et_general_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_general_params_entry),
.packed_entry_size = SJA1105ET_SIZE_GENERAL_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT,
},
[BLK_IDX_XMII_PARAMS] = {
.packing = sja1105_xmii_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
.packed_entry_size = SJA1105_SIZE_XMII_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_XMII_PARAMS_COUNT,
},
};
/* SJA1105T: First generation, TTEthernet */
struct sja1105_table_ops sja1105t_table_ops[BLK_IDX_MAX] = {
[BLK_IDX_L2_LOOKUP] = {
.packing = sja1105et_l2_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry),
.packed_entry_size = SJA1105ET_SIZE_L2_LOOKUP_ENTRY,
.max_entry_count = SJA1105_MAX_L2_LOOKUP_COUNT,
},
[BLK_IDX_L2_POLICING] = {
.packing = sja1105_l2_policing_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_policing_entry),
.packed_entry_size = SJA1105_SIZE_L2_POLICING_ENTRY,
.max_entry_count = SJA1105_MAX_L2_POLICING_COUNT,
},
[BLK_IDX_VLAN_LOOKUP] = {
.packing = sja1105_vlan_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_vlan_lookup_entry),
.packed_entry_size = SJA1105_SIZE_VLAN_LOOKUP_ENTRY,
.max_entry_count = SJA1105_MAX_VLAN_LOOKUP_COUNT,
},
[BLK_IDX_L2_FORWARDING] = {
.packing = sja1105_l2_forwarding_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_entry),
.packed_entry_size = SJA1105_SIZE_L2_FORWARDING_ENTRY,
.max_entry_count = SJA1105_MAX_L2_FORWARDING_COUNT,
},
[BLK_IDX_MAC_CONFIG] = {
.packing = sja1105et_mac_config_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_mac_config_entry),
.packed_entry_size = SJA1105ET_SIZE_MAC_CONFIG_ENTRY,
.max_entry_count = SJA1105_MAX_MAC_CONFIG_COUNT,
},
[BLK_IDX_L2_LOOKUP_PARAMS] = {
.packing = sja1105et_l2_lookup_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry),
.packed_entry_size = SJA1105ET_SIZE_L2_LOOKUP_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_L2_LOOKUP_PARAMS_COUNT,
},
[BLK_IDX_L2_FORWARDING_PARAMS] = {
.packing = sja1105_l2_forwarding_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_params_entry),
.packed_entry_size = SJA1105_SIZE_L2_FORWARDING_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_L2_FORWARDING_PARAMS_COUNT,
},
[BLK_IDX_GENERAL_PARAMS] = {
.packing = sja1105et_general_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_general_params_entry),
.packed_entry_size = SJA1105ET_SIZE_GENERAL_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT,
},
[BLK_IDX_XMII_PARAMS] = {
.packing = sja1105_xmii_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
.packed_entry_size = SJA1105_SIZE_XMII_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_XMII_PARAMS_COUNT,
},
};
/* SJA1105P: Second generation, no TTEthernet, no SGMII */
struct sja1105_table_ops sja1105p_table_ops[BLK_IDX_MAX] = {
[BLK_IDX_L2_LOOKUP] = {
.packing = sja1105pqrs_l2_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry),
.packed_entry_size = SJA1105PQRS_SIZE_L2_LOOKUP_ENTRY,
.max_entry_count = SJA1105_MAX_L2_LOOKUP_COUNT,
},
[BLK_IDX_L2_POLICING] = {
.packing = sja1105_l2_policing_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_policing_entry),
.packed_entry_size = SJA1105_SIZE_L2_POLICING_ENTRY,
.max_entry_count = SJA1105_MAX_L2_POLICING_COUNT,
},
[BLK_IDX_VLAN_LOOKUP] = {
.packing = sja1105_vlan_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_vlan_lookup_entry),
.packed_entry_size = SJA1105_SIZE_VLAN_LOOKUP_ENTRY,
.max_entry_count = SJA1105_MAX_VLAN_LOOKUP_COUNT,
},
[BLK_IDX_L2_FORWARDING] = {
.packing = sja1105_l2_forwarding_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_entry),
.packed_entry_size = SJA1105_SIZE_L2_FORWARDING_ENTRY,
.max_entry_count = SJA1105_MAX_L2_FORWARDING_COUNT,
},
[BLK_IDX_MAC_CONFIG] = {
.packing = sja1105pqrs_mac_config_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_mac_config_entry),
.packed_entry_size = SJA1105PQRS_SIZE_MAC_CONFIG_ENTRY,
.max_entry_count = SJA1105_MAX_MAC_CONFIG_COUNT,
},
[BLK_IDX_L2_LOOKUP_PARAMS] = {
.packing = sja1105pqrs_l2_lookup_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry),
.packed_entry_size = SJA1105PQRS_SIZE_L2_LOOKUP_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_L2_LOOKUP_PARAMS_COUNT,
},
[BLK_IDX_L2_FORWARDING_PARAMS] = {
.packing = sja1105_l2_forwarding_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_params_entry),
.packed_entry_size = SJA1105_SIZE_L2_FORWARDING_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_L2_FORWARDING_PARAMS_COUNT,
},
[BLK_IDX_GENERAL_PARAMS] = {
.packing = sja1105pqrs_general_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_general_params_entry),
.packed_entry_size = SJA1105PQRS_SIZE_GENERAL_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT,
},
[BLK_IDX_XMII_PARAMS] = {
.packing = sja1105_xmii_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
.packed_entry_size = SJA1105_SIZE_XMII_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_XMII_PARAMS_COUNT,
},
};
/* SJA1105Q: Second generation, TTEthernet, no SGMII */
struct sja1105_table_ops sja1105q_table_ops[BLK_IDX_MAX] = {
[BLK_IDX_L2_LOOKUP] = {
.packing = sja1105pqrs_l2_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry),
.packed_entry_size = SJA1105PQRS_SIZE_L2_LOOKUP_ENTRY,
.max_entry_count = SJA1105_MAX_L2_LOOKUP_COUNT,
},
[BLK_IDX_L2_POLICING] = {
.packing = sja1105_l2_policing_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_policing_entry),
.packed_entry_size = SJA1105_SIZE_L2_POLICING_ENTRY,
.max_entry_count = SJA1105_MAX_L2_POLICING_COUNT,
},
[BLK_IDX_VLAN_LOOKUP] = {
.packing = sja1105_vlan_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_vlan_lookup_entry),
.packed_entry_size = SJA1105_SIZE_VLAN_LOOKUP_ENTRY,
.max_entry_count = SJA1105_MAX_VLAN_LOOKUP_COUNT,
},
[BLK_IDX_L2_FORWARDING] = {
.packing = sja1105_l2_forwarding_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_entry),
.packed_entry_size = SJA1105_SIZE_L2_FORWARDING_ENTRY,
.max_entry_count = SJA1105_MAX_L2_FORWARDING_COUNT,
},
[BLK_IDX_MAC_CONFIG] = {
.packing = sja1105pqrs_mac_config_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_mac_config_entry),
.packed_entry_size = SJA1105PQRS_SIZE_MAC_CONFIG_ENTRY,
.max_entry_count = SJA1105_MAX_MAC_CONFIG_COUNT,
},
[BLK_IDX_L2_LOOKUP_PARAMS] = {
.packing = sja1105pqrs_l2_lookup_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry),
.packed_entry_size = SJA1105PQRS_SIZE_L2_LOOKUP_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_L2_LOOKUP_PARAMS_COUNT,
},
[BLK_IDX_L2_FORWARDING_PARAMS] = {
.packing = sja1105_l2_forwarding_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_params_entry),
.packed_entry_size = SJA1105_SIZE_L2_FORWARDING_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_L2_FORWARDING_PARAMS_COUNT,
},
[BLK_IDX_GENERAL_PARAMS] = {
.packing = sja1105pqrs_general_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_general_params_entry),
.packed_entry_size = SJA1105PQRS_SIZE_GENERAL_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT,
},
[BLK_IDX_XMII_PARAMS] = {
.packing = sja1105_xmii_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
.packed_entry_size = SJA1105_SIZE_XMII_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_XMII_PARAMS_COUNT,
},
};
/* SJA1105R: Second generation, no TTEthernet, SGMII */
struct sja1105_table_ops sja1105r_table_ops[BLK_IDX_MAX] = {
[BLK_IDX_L2_LOOKUP] = {
.packing = sja1105pqrs_l2_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry),
.packed_entry_size = SJA1105PQRS_SIZE_L2_LOOKUP_ENTRY,
.max_entry_count = SJA1105_MAX_L2_LOOKUP_COUNT,
},
[BLK_IDX_L2_POLICING] = {
.packing = sja1105_l2_policing_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_policing_entry),
.packed_entry_size = SJA1105_SIZE_L2_POLICING_ENTRY,
.max_entry_count = SJA1105_MAX_L2_POLICING_COUNT,
},
[BLK_IDX_VLAN_LOOKUP] = {
.packing = sja1105_vlan_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_vlan_lookup_entry),
.packed_entry_size = SJA1105_SIZE_VLAN_LOOKUP_ENTRY,
.max_entry_count = SJA1105_MAX_VLAN_LOOKUP_COUNT,
},
[BLK_IDX_L2_FORWARDING] = {
.packing = sja1105_l2_forwarding_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_entry),
.packed_entry_size = SJA1105_SIZE_L2_FORWARDING_ENTRY,
.max_entry_count = SJA1105_MAX_L2_FORWARDING_COUNT,
},
[BLK_IDX_MAC_CONFIG] = {
.packing = sja1105pqrs_mac_config_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_mac_config_entry),
.packed_entry_size = SJA1105PQRS_SIZE_MAC_CONFIG_ENTRY,
.max_entry_count = SJA1105_MAX_MAC_CONFIG_COUNT,
},
[BLK_IDX_L2_LOOKUP_PARAMS] = {
.packing = sja1105pqrs_l2_lookup_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry),
.packed_entry_size = SJA1105PQRS_SIZE_L2_LOOKUP_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_L2_LOOKUP_PARAMS_COUNT,
},
[BLK_IDX_L2_FORWARDING_PARAMS] = {
.packing = sja1105_l2_forwarding_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_params_entry),
.packed_entry_size = SJA1105_SIZE_L2_FORWARDING_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_L2_FORWARDING_PARAMS_COUNT,
},
[BLK_IDX_GENERAL_PARAMS] = {
.packing = sja1105pqrs_general_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_general_params_entry),
.packed_entry_size = SJA1105PQRS_SIZE_GENERAL_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT,
},
[BLK_IDX_XMII_PARAMS] = {
.packing = sja1105_xmii_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
.packed_entry_size = SJA1105_SIZE_XMII_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_XMII_PARAMS_COUNT,
},
};
/* SJA1105S: Second generation, TTEthernet, SGMII */
struct sja1105_table_ops sja1105s_table_ops[BLK_IDX_MAX] = {
[BLK_IDX_L2_LOOKUP] = {
.packing = sja1105pqrs_l2_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry),
.packed_entry_size = SJA1105PQRS_SIZE_L2_LOOKUP_ENTRY,
.max_entry_count = SJA1105_MAX_L2_LOOKUP_COUNT,
},
[BLK_IDX_L2_POLICING] = {
.packing = sja1105_l2_policing_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_policing_entry),
.packed_entry_size = SJA1105_SIZE_L2_POLICING_ENTRY,
.max_entry_count = SJA1105_MAX_L2_POLICING_COUNT,
},
[BLK_IDX_VLAN_LOOKUP] = {
.packing = sja1105_vlan_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_vlan_lookup_entry),
.packed_entry_size = SJA1105_SIZE_VLAN_LOOKUP_ENTRY,
.max_entry_count = SJA1105_MAX_VLAN_LOOKUP_COUNT,
},
[BLK_IDX_L2_FORWARDING] = {
.packing = sja1105_l2_forwarding_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_entry),
.packed_entry_size = SJA1105_SIZE_L2_FORWARDING_ENTRY,
.max_entry_count = SJA1105_MAX_L2_FORWARDING_COUNT,
},
[BLK_IDX_MAC_CONFIG] = {
.packing = sja1105pqrs_mac_config_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_mac_config_entry),
.packed_entry_size = SJA1105PQRS_SIZE_MAC_CONFIG_ENTRY,
.max_entry_count = SJA1105_MAX_MAC_CONFIG_COUNT,
},
[BLK_IDX_L2_LOOKUP_PARAMS] = {
.packing = sja1105pqrs_l2_lookup_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry),
.packed_entry_size = SJA1105PQRS_SIZE_L2_LOOKUP_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_L2_LOOKUP_PARAMS_COUNT,
},
[BLK_IDX_L2_FORWARDING_PARAMS] = {
.packing = sja1105_l2_forwarding_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_params_entry),
.packed_entry_size = SJA1105_SIZE_L2_FORWARDING_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_L2_FORWARDING_PARAMS_COUNT,
},
[BLK_IDX_GENERAL_PARAMS] = {
.packing = sja1105pqrs_general_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_general_params_entry),
.packed_entry_size = SJA1105PQRS_SIZE_GENERAL_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT,
},
[BLK_IDX_XMII_PARAMS] = {
.packing = sja1105_xmii_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
.packed_entry_size = SJA1105_SIZE_XMII_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_XMII_PARAMS_COUNT,
},
};
int sja1105_static_config_init(struct sja1105_static_config *config,
			       const struct sja1105_table_ops *static_ops,
			       u64 device_id)
{
	enum sja1105_blk_idx i;

	*config = (struct sja1105_static_config) {0};

	/* Transfer static_ops array from priv into per-table ops
	 * for handier access
	 */
	for (i = 0; i < BLK_IDX_MAX; i++)
		config->tables[i].ops = &static_ops[i];

	config->device_id = device_id;
	return 0;
}
void sja1105_static_config_free(struct sja1105_static_config *config)
{
	enum sja1105_blk_idx i;

	for (i = 0; i < BLK_IDX_MAX; i++) {
		if (config->tables[i].entry_count) {
			kfree(config->tables[i].entries);
			config->tables[i].entry_count = 0;
		}
	}
}
int sja1105_table_delete_entry(struct sja1105_table *table, int i)
{
	size_t entry_size = table->ops->unpacked_entry_size;
	u8 *entries = table->entries;

	/* Index must point at an existing entry; with ">" instead of ">=",
	 * i == entry_count would decrement the count without deleting
	 * anything, and the memmove below would read one entry past the
	 * end of the array.
	 */
	if (i >= table->entry_count)
		return -ERANGE;

	memmove(entries + i * entry_size, entries + (i + 1) * entry_size,
		(table->entry_count - i - 1) * entry_size);

	table->entry_count--;
	return 0;
}
/* No pointers to table->entries should be kept when this is called. */
int sja1105_table_resize(struct sja1105_table *table, size_t new_count)
{
	size_t entry_size = table->ops->unpacked_entry_size;
	void *new_entries, *old_entries = table->entries;

	if (new_count > table->ops->max_entry_count)
		return -ERANGE;

	new_entries = kcalloc(new_count, entry_size, GFP_KERNEL);
	if (!new_entries)
		return -ENOMEM;

	memcpy(new_entries, old_entries,
	       min(new_count, table->entry_count) * entry_size);

	table->entries = new_entries;
	table->entry_count = new_count;
	kfree(old_entries);
	return 0;
}
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright (c) 2016-2018, NXP Semiconductors
* Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
*/
#ifndef _SJA1105_STATIC_CONFIG_H
#define _SJA1105_STATIC_CONFIG_H
#include <linux/packing.h>
#include <linux/types.h>
#include <asm/types.h>
#define SJA1105_SIZE_DEVICE_ID 4
#define SJA1105_SIZE_TABLE_HEADER 12
#define SJA1105_SIZE_L2_POLICING_ENTRY 8
#define SJA1105_SIZE_VLAN_LOOKUP_ENTRY 8
#define SJA1105_SIZE_L2_FORWARDING_ENTRY 8
#define SJA1105_SIZE_L2_FORWARDING_PARAMS_ENTRY 12
#define SJA1105_SIZE_XMII_PARAMS_ENTRY 4
#define SJA1105ET_SIZE_L2_LOOKUP_ENTRY 12
#define SJA1105ET_SIZE_MAC_CONFIG_ENTRY 28
#define SJA1105ET_SIZE_L2_LOOKUP_PARAMS_ENTRY 4
#define SJA1105ET_SIZE_GENERAL_PARAMS_ENTRY 40
#define SJA1105PQRS_SIZE_L2_LOOKUP_ENTRY 20
#define SJA1105PQRS_SIZE_MAC_CONFIG_ENTRY 32
#define SJA1105PQRS_SIZE_L2_LOOKUP_PARAMS_ENTRY 16
#define SJA1105PQRS_SIZE_GENERAL_PARAMS_ENTRY 44
/* UM10944.pdf Page 11, Table 2. Configuration Blocks */
enum {
BLKID_L2_LOOKUP = 0x05,
BLKID_L2_POLICING = 0x06,
BLKID_VLAN_LOOKUP = 0x07,
BLKID_L2_FORWARDING = 0x08,
BLKID_MAC_CONFIG = 0x09,
BLKID_L2_LOOKUP_PARAMS = 0x0D,
BLKID_L2_FORWARDING_PARAMS = 0x0E,
BLKID_GENERAL_PARAMS = 0x11,
BLKID_XMII_PARAMS = 0x4E,
};
enum sja1105_blk_idx {
BLK_IDX_L2_LOOKUP = 0,
BLK_IDX_L2_POLICING,
BLK_IDX_VLAN_LOOKUP,
BLK_IDX_L2_FORWARDING,
BLK_IDX_MAC_CONFIG,
BLK_IDX_L2_LOOKUP_PARAMS,
BLK_IDX_L2_FORWARDING_PARAMS,
BLK_IDX_GENERAL_PARAMS,
BLK_IDX_XMII_PARAMS,
BLK_IDX_MAX,
/* Fake block indices that are only valid for dynamic access */
BLK_IDX_MGMT_ROUTE,
BLK_IDX_MAX_DYN,
BLK_IDX_INVAL = -1,
};
#define SJA1105_MAX_L2_LOOKUP_COUNT 1024
#define SJA1105_MAX_L2_POLICING_COUNT 45
#define SJA1105_MAX_VLAN_LOOKUP_COUNT 4096
#define SJA1105_MAX_L2_FORWARDING_COUNT 13
#define SJA1105_MAX_MAC_CONFIG_COUNT 5
#define SJA1105_MAX_L2_LOOKUP_PARAMS_COUNT 1
#define SJA1105_MAX_L2_FORWARDING_PARAMS_COUNT 1
#define SJA1105_MAX_GENERAL_PARAMS_COUNT 1
#define SJA1105_MAX_XMII_PARAMS_COUNT 1
#define SJA1105_MAX_FRAME_MEMORY 929
#define SJA1105E_DEVICE_ID 0x9C00000Cull
#define SJA1105T_DEVICE_ID 0x9E00030Eull
#define SJA1105PR_DEVICE_ID 0xAF00030Eull
#define SJA1105QS_DEVICE_ID 0xAE00030Eull
#define SJA1105ET_PART_NO 0x9A83
#define SJA1105P_PART_NO 0x9A84
#define SJA1105Q_PART_NO 0x9A85
#define SJA1105R_PART_NO 0x9A86
#define SJA1105S_PART_NO 0x9A87
struct sja1105_general_params_entry {
u64 vllupformat;
u64 mirr_ptacu;
u64 switchid;
u64 hostprio;
u64 mac_fltres1;
u64 mac_fltres0;
u64 mac_flt1;
u64 mac_flt0;
u64 incl_srcpt1;
u64 incl_srcpt0;
u64 send_meta1;
u64 send_meta0;
u64 casc_port;
u64 host_port;
u64 mirr_port;
u64 vlmarker;
u64 vlmask;
u64 tpid;
u64 ignore2stf;
u64 tpid2;
/* P/Q/R/S only */
u64 queue_ts;
u64 egrmirrvid;
u64 egrmirrpcp;
u64 egrmirrdei;
u64 replay_port;
};
struct sja1105_vlan_lookup_entry {
u64 ving_mirr;
u64 vegr_mirr;
u64 vmemb_port;
u64 vlan_bc;
u64 tag_port;
u64 vlanid;
};
struct sja1105_l2_lookup_entry {
u64 vlanid;
u64 macaddr;
u64 destports;
u64 enfport;
u64 index;
};
struct sja1105_l2_lookup_params_entry {
u64 maxage; /* Shared */
u64 dyn_tbsz; /* E/T only */
u64 poly; /* E/T only */
u64 shared_learn; /* Shared */
u64 no_enf_hostprt; /* Shared */
u64 no_mgmt_learn; /* Shared */
};
struct sja1105_l2_forwarding_entry {
u64 bc_domain;
u64 reach_port;
u64 fl_domain;
u64 vlan_pmap[8];
};
struct sja1105_l2_forwarding_params_entry {
u64 max_dynp;
u64 part_spc[8];
};
struct sja1105_l2_policing_entry {
u64 sharindx;
u64 smax;
u64 rate;
u64 maxlen;
u64 partition;
};
struct sja1105_mac_config_entry {
u64 top[8];
u64 base[8];
u64 enabled[8];
u64 ifg;
u64 speed;
u64 tp_delin;
u64 tp_delout;
u64 maxage;
u64 vlanprio;
u64 vlanid;
u64 ing_mirr;
u64 egr_mirr;
u64 drpnona664;
u64 drpdtag;
u64 drpuntag;
u64 retag;
u64 dyn_learn;
u64 egress;
u64 ingress;
};
struct sja1105_xmii_params_entry {
u64 phy_mac[5];
u64 xmii_mode[5];
};
struct sja1105_table_header {
u64 block_id;
u64 len;
u64 crc;
};
struct sja1105_table_ops {
size_t (*packing)(void *buf, void *entry_ptr, enum packing_op op);
size_t unpacked_entry_size;
size_t packed_entry_size;
size_t max_entry_count;
};
struct sja1105_table {
const struct sja1105_table_ops *ops;
size_t entry_count;
void *entries;
};
struct sja1105_static_config {
u64 device_id;
struct sja1105_table tables[BLK_IDX_MAX];
};
extern struct sja1105_table_ops sja1105e_table_ops[BLK_IDX_MAX];
extern struct sja1105_table_ops sja1105t_table_ops[BLK_IDX_MAX];
extern struct sja1105_table_ops sja1105p_table_ops[BLK_IDX_MAX];
extern struct sja1105_table_ops sja1105q_table_ops[BLK_IDX_MAX];
extern struct sja1105_table_ops sja1105r_table_ops[BLK_IDX_MAX];
extern struct sja1105_table_ops sja1105s_table_ops[BLK_IDX_MAX];
size_t sja1105_table_header_packing(void *buf, void *hdr, enum packing_op op);
void
sja1105_table_header_pack_with_crc(void *buf, struct sja1105_table_header *hdr);
size_t
sja1105_static_config_get_length(const struct sja1105_static_config *config);
typedef enum {
SJA1105_CONFIG_OK = 0,
SJA1105_MISSING_L2_POLICING_TABLE,
SJA1105_MISSING_L2_FORWARDING_TABLE,
SJA1105_MISSING_L2_FORWARDING_PARAMS_TABLE,
SJA1105_MISSING_GENERAL_PARAMS_TABLE,
SJA1105_MISSING_VLAN_TABLE,
SJA1105_MISSING_XMII_TABLE,
SJA1105_MISSING_MAC_TABLE,
SJA1105_OVERCOMMITTED_FRAME_MEMORY,
} sja1105_config_valid_t;
extern const char *sja1105_static_config_error_msg[];
sja1105_config_valid_t
sja1105_static_config_check_valid(const struct sja1105_static_config *config);
void
sja1105_static_config_pack(void *buf, struct sja1105_static_config *config);
int sja1105_static_config_init(struct sja1105_static_config *config,
const struct sja1105_table_ops *static_ops,
u64 device_id);
void sja1105_static_config_free(struct sja1105_static_config *config);
int sja1105_table_delete_entry(struct sja1105_table *table, int i);
int sja1105_table_resize(struct sja1105_table *table, size_t new_count);
u32 sja1105_crc32(const void *buf, size_t len);
void sja1105_pack(void *buf, const u64 *val, int start, int end, size_t len);
void sja1105_unpack(const void *buf, u64 *val, int start, int end, size_t len);
void sja1105_packing(void *buf, u64 *val, int start, int end,
size_t len, enum packing_op op);
#endif
/* SPDX-License-Identifier: GPL-2.0
* Copyright (c) 2019, Vladimir Oltean <olteanv@gmail.com>
*/
/* Included by drivers/net/dsa/sja1105/sja1105.h */
#ifndef _NET_DSA_SJA1105_H
#define _NET_DSA_SJA1105_H
#include <linux/etherdevice.h>
#define ETH_P_SJA1105 ETH_P_DSA_8021Q
/* The switch can only be convinced to stay in unmanaged mode and not trap any
* link-local traffic by actually telling it to filter frames sent at the
* 00:00:00:00:00:00 destination MAC.
*/
#define SJA1105_LINKLOCAL_FILTER_A 0x000000000000ull
#define SJA1105_LINKLOCAL_FILTER_A_MASK 0xFFFFFFFFFFFFull
#define SJA1105_LINKLOCAL_FILTER_B 0x000000000000ull
#define SJA1105_LINKLOCAL_FILTER_B_MASK 0xFFFFFFFFFFFFull
#endif /* _NET_DSA_SJA1105_H */
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright (c) 2016-2018, NXP Semiconductors
* Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
*/
#ifndef _LINUX_PACKING_H
#define _LINUX_PACKING_H
#include <linux/types.h>
#include <linux/bitops.h>
#define QUIRK_MSB_ON_THE_RIGHT BIT(0)
#define QUIRK_LITTLE_ENDIAN BIT(1)
#define QUIRK_LSW32_IS_FIRST BIT(2)
enum packing_op {
PACK,
UNPACK,
};
/**
 * packing - Convert numbers (currently u64) between a packed and an unpacked
 *	     format. Unpacked means laid out in memory in the CPU's native
 *	     understanding of integers, while packed means anything else that
 *	     requires translation.
 *
 * @pbuf: Pointer to a buffer holding the packed value.
 * @uval: Pointer to a u64 holding the unpacked value.
 * @startbit: The index (in logical notation, compensated for quirks) where
 *	      the packed value starts within pbuf. Must be larger than, or
 *	      equal to, endbit.
 * @endbit: The index (in logical notation, compensated for quirks) where
 *	    the packed value ends within pbuf. Must be smaller than, or equal
 *	    to, startbit.
 * @pbuflen: The length in bytes of the packed buffer pointed to by @pbuf.
 * @op: If PACK, then uval will be treated as a const pointer and copied
 *	(packed) into pbuf, between startbit and endbit.
 *	If UNPACK, then pbuf will be treated as a const pointer and the
 *	logical value between startbit and endbit will be copied (unpacked)
 *	to uval.
 * @quirks: A bit mask of QUIRK_LITTLE_ENDIAN, QUIRK_LSW32_IS_FIRST and
 *	    QUIRK_MSB_ON_THE_RIGHT.
 *
 * Return: 0 on success, -EINVAL or -ERANGE if called incorrectly. Assuming
 *	   correct usage, the return code may be discarded.
 *	   If op is PACK, pbuf is modified.
 *	   If op is UNPACK, uval is modified.
 */
int packing(void *pbuf, u64 *uval, int startbit, int endbit, size_t pbuflen,
	    enum packing_op op, u8 quirks);
#endif
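As a sanity check of the bit-numbering convention documented above, here is a hedged userspace sketch of the PACK direction with no quirks (logical bit 0 is the LSB of the buffer's last byte, i.e. a big-endian logical layout). `pack_nq` is an illustrative name, not part of the kernel API, and it deliberately skips the quirk handling and error checking of the real packing():

```c
#include <stddef.h>
#include <stdint.h>

/* No-quirk PACK sketch: place the (startbit - endbit + 1) low bits of
 * uval so that logical bit b lands at bit (b % 8) of the byte at offset
 * pbuflen - b / 8 - 1.
 */
static void pack_nq(uint8_t *pbuf, size_t pbuflen, uint64_t uval,
		    int startbit, int endbit)
{
	int b;

	for (b = endbit; b <= startbit; b++) {
		size_t byte = pbuflen - b / 8 - 1;
		uint8_t bit = (uval >> (b - endbit)) & 1;

		pbuf[byte] &= ~(1u << (b % 8));
		pbuf[byte] |= bit << (b % 8);
	}
}
```

For example, packing 0xA5 into logical bits 15..8 of a 4-byte buffer sets only the byte at offset 2, since logical bits 15..8 form the second-to-last byte.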
@@ -109,6 +109,7 @@
#define ETH_P_QINQ2 0x9200 /* deprecated QinQ VLAN [ NOT AN OFFICIALLY REGISTERED ID ] */
#define ETH_P_QINQ3 0x9300 /* deprecated QinQ VLAN [ NOT AN OFFICIALLY REGISTERED ID ] */
#define ETH_P_EDSA 0xDADA /* Ethertype DSA [ NOT AN OFFICIALLY REGISTERED ID ] */
#define ETH_P_DSA_8021Q 0xDADB /* Fake VLAN Header for DSA [ NOT AN OFFICIALLY REGISTERED ID ] */
#define ETH_P_IFE 0xED3E /* ForCES inter-FE LFB type */
#define ETH_P_AF_IUCV 0xFBFB /* IBM af_iucv [ NOT AN OFFICIALLY REGISTERED ID ] */
@@ -18,6 +18,23 @@ config RAID6_PQ_BENCHMARK
Benchmark all available RAID6 PQ functions on init and choose the
fastest one.
config PACKING
	bool "Generic bitfield packing and unpacking"
	default n
	help
	  This option provides the packing() helper function, which permits
	  converting bitfields between a CPU-usable representation and a
	  memory representation that can have any combination of these quirks:
	    - Is little endian (bytes are reversed within a 32-bit group)
	    - The least-significant 32-bit word comes first (within a 64-bit
	      group)
	    - The most significant bit of a byte is at its right (bit 0 of a
	      register description is numerically 2^7).
	  Drivers may use these helpers to match the bit indices as described
	  in the data sheets of the peripherals they are in control of.

	  When in doubt, say N.
config BITREVERSE
tristate
@@ -108,6 +108,7 @@ obj-$(CONFIG_DEBUG_LIST) += list_debug.o
obj-$(CONFIG_DEBUG_OBJECTS) += debugobjects.o
obj-$(CONFIG_BITREVERSE) += bitrev.o
obj-$(CONFIG_PACKING) += packing.o
obj-$(CONFIG_RATIONAL) += rational.o
obj-$(CONFIG_CRC_CCITT) += crc-ccitt.o
obj-$(CONFIG_CRC16) += crc16.o
// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
/* Copyright (c) 2016-2018, NXP Semiconductors
* Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
*/
#include <linux/packing.h>
#include <linux/module.h>
#include <linux/bitops.h>
#include <linux/errno.h>
#include <linux/types.h>
static int get_le_offset(int offset)
{
	int closest_multiple_of_4;

	closest_multiple_of_4 = (offset / 4) * 4;
	offset -= closest_multiple_of_4;
	return closest_multiple_of_4 + (3 - offset);
}
static int get_reverse_lsw32_offset(int offset, size_t len)
{
	int closest_multiple_of_4;
	int word_index;

	word_index = offset / 4;
	closest_multiple_of_4 = word_index * 4;
	offset -= closest_multiple_of_4;
	word_index = (len / 4) - word_index - 1;
	return word_index * 4 + offset;
}
static u64 bit_reverse(u64 val, unsigned int width)
{
	u64 new_val = 0;
	unsigned int bit;
	unsigned int i;

	/* Use 64-bit masks and shifts so the helper stays correct for any
	 * width up to 64, not just the 8-bit boxes it is called with here.
	 */
	for (i = 0; i < width; i++) {
		bit = (val & BIT_ULL(i)) != 0;
		new_val |= ((u64)bit << (width - i - 1));
	}
	return new_val;
}
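The mapping performed by bit_reverse() is bit i → bit (width - i - 1). A standalone copy (using `uint64_t` for the kernel's `u64`, illustrative only) makes that easy to check in userspace:

```c
#include <stdint.h>

/* Standalone copy of bit_reverse() so the mapping
 * bit i -> bit (width - i - 1) can be exercised outside the kernel.
 */
static uint64_t bit_reverse(uint64_t val, unsigned int width)
{
	uint64_t new_val = 0;
	unsigned int bit;
	unsigned int i;

	for (i = 0; i < width; i++) {
		bit = (val & (1ULL << i)) != 0;
		new_val |= ((uint64_t)bit << (width - i - 1));
	}
	return new_val;
}
```

For instance, 0xD (0b1101) reversed over a width of 4 gives 0xB (0b1011), and a palindromic pattern like 0xA5 maps to itself over 8 bits.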
static void adjust_for_msb_right_quirk(u64 *to_write, int *box_start_bit,
				       int *box_end_bit, u8 *box_mask)
{
	int box_bit_width = *box_start_bit - *box_end_bit + 1;
	int new_box_start_bit, new_box_end_bit;

	*to_write >>= *box_end_bit;
	*to_write = bit_reverse(*to_write, box_bit_width);
	*to_write <<= *box_end_bit;

	new_box_end_bit = box_bit_width - *box_start_bit - 1;
	new_box_start_bit = box_bit_width - *box_end_bit - 1;
	*box_mask = GENMASK_ULL(new_box_start_bit, new_box_end_bit);
	*box_start_bit = new_box_start_bit;
	*box_end_bit = new_box_end_bit;
}
/**
 * packing - Convert numbers (currently u64) between a packed and an unpacked
 *	     format. Unpacked means laid out in memory in the CPU's native
 *	     understanding of integers, while packed means anything else that
 *	     requires translation.
 *
 * @pbuf: Pointer to a buffer holding the packed value.
 * @uval: Pointer to a u64 holding the unpacked value.
 * @startbit: The index (in logical notation, compensated for quirks) where
 *	      the packed value starts within pbuf. Must be larger than, or
 *	      equal to, endbit.
 * @endbit: The index (in logical notation, compensated for quirks) where
 *	    the packed value ends within pbuf. Must be smaller than, or equal
 *	    to, startbit.
 * @pbuflen: The length in bytes of the packed buffer pointed to by @pbuf.
 * @op: If PACK, then uval will be treated as a const pointer and copied
 *	(packed) into pbuf, between startbit and endbit.
 *	If UNPACK, then pbuf will be treated as a const pointer and the
 *	logical value between startbit and endbit will be copied (unpacked)
 *	to uval.
 * @quirks: A bit mask of QUIRK_LITTLE_ENDIAN, QUIRK_LSW32_IS_FIRST and
 *	    QUIRK_MSB_ON_THE_RIGHT.
 *
 * Return: 0 on success, -EINVAL or -ERANGE if called incorrectly. Assuming
 *	   correct usage, the return code may be discarded.
 *	   If op is PACK, pbuf is modified.
 *	   If op is UNPACK, uval is modified.
 */
int packing(void *pbuf, u64 *uval, int startbit, int endbit, size_t pbuflen,
	    enum packing_op op, u8 quirks)
{
	/* Number of bits for storing "uval";
	 * also width of the field to access in the pbuf
	 */
	u64 value_width;
	/* Logical byte indices corresponding to the
	 * start and end of the field.
	 */
	int plogical_first_u8, plogical_last_u8, box;

	/* startbit is expected to be larger than endbit */
	if (startbit < endbit)
		/* Invalid function call */
		return -EINVAL;

	value_width = startbit - endbit + 1;
	if (value_width > 64)
		return -ERANGE;

	/* Check if "uval" fits in "value_width" bits.
	 * If value_width is 64, the check will fail, but any
	 * 64-bit uval will surely fit.
	 */
	if (op == PACK && value_width < 64 && (*uval >= (1ull << value_width)))
		/* Cannot store "uval" inside "value_width" bits.
		 * Truncating "uval" is most certainly not desirable,
		 * so simply erroring out is appropriate.
		 */
		return -ERANGE;

	/* Initialize parameter */
	if (op == UNPACK)
		*uval = 0;

	/* Iterate through an idealistic view of the pbuf as a u64 with
	 * no quirks, u8 by u8 (aligned at u8 boundaries), from high to low
	 * logical bit significance. "box" denotes the current logical u8.
	 */
	plogical_first_u8 = startbit / 8;
	plogical_last_u8 = endbit / 8;

	for (box = plogical_first_u8; box >= plogical_last_u8; box--) {
		/* Bit indices into the currently accessed 8-bit box */
		int box_start_bit, box_end_bit, box_addr;
		u8 box_mask;
		/* Corresponding bits from the unpacked u64 parameter */
		int proj_start_bit, proj_end_bit;
		u64 proj_mask;

		/* This u8 may need to be accessed in its entirety
		 * (from bit 7 to bit 0), or not, depending on the
		 * input arguments startbit and endbit.
		 */
		if (box == plogical_first_u8)
			box_start_bit = startbit % 8;
		else
			box_start_bit = 7;
		if (box == plogical_last_u8)
			box_end_bit = endbit % 8;
		else
			box_end_bit = 0;

		/* We have determined the box bit start and end.
		 * Now we calculate where this (masked) u8 box would fit
		 * in the unpacked (CPU-readable) u64 - the u8 box's
		 * projection onto the unpacked u64. Though the
		 * box is u8, the projection is u64 because it may fall
		 * anywhere within the unpacked u64.
		 */
		proj_start_bit = ((box * 8) + box_start_bit) - endbit;
		proj_end_bit = ((box * 8) + box_end_bit) - endbit;
		proj_mask = GENMASK_ULL(proj_start_bit, proj_end_bit);
		box_mask = GENMASK_ULL(box_start_bit, box_end_bit);

		/* Determine the offset of the u8 box inside the pbuf,
		 * adjusted for quirks. The adjusted box_addr will be used for
		 * effective addressing inside the pbuf (so it's not
		 * logical any longer).
		 */
		box_addr = pbuflen - box - 1;
		if (quirks & QUIRK_LITTLE_ENDIAN)
			box_addr = get_le_offset(box_addr);
		if (quirks & QUIRK_LSW32_IS_FIRST)
			box_addr = get_reverse_lsw32_offset(box_addr,
							    pbuflen);

		if (op == UNPACK) {
			u64 pval;

			/* Read from pbuf, write to uval */
			pval = ((u8 *)pbuf)[box_addr] & box_mask;
			if (quirks & QUIRK_MSB_ON_THE_RIGHT)
				adjust_for_msb_right_quirk(&pval,
							   &box_start_bit,
							   &box_end_bit,
							   &box_mask);

			pval >>= box_end_bit;
			pval <<= proj_end_bit;
			*uval &= ~proj_mask;
			*uval |= pval;
		} else {
			u64 pval;

			/* Write to pbuf, read from uval */
			pval = (*uval) & proj_mask;
			pval >>= proj_end_bit;
			if (quirks & QUIRK_MSB_ON_THE_RIGHT)
				adjust_for_msb_right_quirk(&pval,
							   &box_start_bit,
							   &box_end_bit,
							   &box_mask);

			pval <<= box_end_bit;
			((u8 *)pbuf)[box_addr] &= ~box_mask;
			((u8 *)pbuf)[box_addr] |= pval;
		}
	}
	return 0;
}
EXPORT_SYMBOL(packing);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Generic bitfield packing and unpacking");
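The two quirk offset helpers above are pure functions, so their address remapping can be verified directly. This standalone copy (userspace, for illustration only) asserts a few known mappings: QUIRK_LITTLE_ENDIAN mirrors byte offsets within each 4-byte group, and QUIRK_LSW32_IS_FIRST swaps the order of the 32-bit words themselves:

```c
#include <stddef.h>

/* Standalone copies of the quirk offset helpers from lib/packing.c. */
static int get_le_offset(int offset)
{
	int closest_multiple_of_4 = (offset / 4) * 4;

	offset -= closest_multiple_of_4;
	return closest_multiple_of_4 + (3 - offset);
}

static int get_reverse_lsw32_offset(int offset, size_t len)
{
	int word_index = offset / 4;
	int closest_multiple_of_4 = word_index * 4;

	offset -= closest_multiple_of_4;
	word_index = (len / 4) - word_index - 1;
	return word_index * 4 + offset;
}
```

For an 8-byte buffer, byte 0 maps to byte 3 under the little-endian quirk (0 and 3 trade places within the first group), while under the LSW32 quirk byte 0 maps to byte 4 (the two 32-bit words swap, byte position within the word is kept).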