- 16 Mar, 2021 38 commits
-
-
Vladimir Oltean authored
Add a short summary of the devlink features supported by the DSA core. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
The documentation was already lagging behind by not mentioning the old version of port_bridge_flags (port_set_egress_floods). So now we are skipping one step and just explaining how a DSA driver should configure address learning and flooding settings. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
On one hand, the link is dead and therefore useless. On the other hand, there are always more drivers to port, but at this stage, DSA does not need to affirm itself as the driver model to use for Ethernet-connected switches (since we already have 15 tagging protocols supported and probably more switch families from various vendors), so there is nothing actionable to do. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
After the recent series containing commit bae33f2b ("net: switchdev: remove the transaction structure from port attributes"), there aren't prepare/commit transactional phases anymore in most of the switchdev objects/attributes, and as a result, there aren't any in the DSA driver API either. So remove this piece. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
After Vivien's series from 2019 containing commits 27d4d19d ("net: dsa: remove limitation of switch index value") and ab8ccae1 ("net: dsa: add ports list in the switch fabric"), this is basically no longer true. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
The chapter about tagging protocols is out of date because it doesn't mention all the taggers that have been added since the last documentation update. But judging by that, it will always tend to lag behind, and there's no good reason to enumerate the supported hardware. Instead we could do something more useful and explain what there is to know about tagging protocols. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Tobias Waldekranz <tobias@waldekranz.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
While preparing some slides for a customer presentation, I found the existing high-level view to be a bit confusing, so I modified it a little bit. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Tobias Waldekranz <tobias@waldekranz.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue
David S. Miller authored
Tony Nguyen says: ==================== 1GbE Intel Wired LAN Driver Updates 2021-03-15 This series contains updates to e1000e only. Chen Yu says: The NIC is put into runtime suspend when there is no cable connected. As a result, it is safe to keep a non-wakeup NIC runtime suspended during s2ram, because the system relies on neither the NIC plug event nor WoL to wake up the system. Besides that, unlike s2idle, s2ram does not need to manipulate S0ix settings during suspend. ==================== Acked-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yejune Deng authored
Use proc_create_seq(), which directly takes a struct seq_operations and deals with network namespaces in ->open. Signed-off-by: Yejune Deng <yejune.deng@gmail.com> Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Nikolay Aleksandrov says: ==================== net: bridge: mcast: simplify allow/block EHT code The set does two minor cleanups of the EHT allow/block handling code: patch 01 removes code which is unreachable (it was used in initial EHT versions, but isn't anymore) and prepares the allow/block functions to be factored out. Patch 02 factors out common allow/block handling code. There are no functional changes. v2: send patch 02 and the proper version of both patches ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nikolay Aleksandrov authored
We handle EHT state change for ALLOW messages in INCLUDE mode and for BLOCK messages in EXCLUDE mode similarly - create the new set entries with the proper filter mode. We also handle EHT state change for ALLOW messages in EXCLUDE mode and for BLOCK messages in INCLUDE mode in a similar way - delete the common entries (current set and new set). Factor out all the common code as follows: - ALLOW/INCLUDE, BLOCK/EXCLUDE: call __eht_create_set_entries() - ALLOW/EXCLUDE, BLOCK/INCLUDE: call __eht_del_common_set_entries() The set entries creation can be reused in __eht_inc_exc() as well. Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
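A rough sketch of the shared dispatch described above, with simplified and assumed argument lists (not the exact bridge code):

    /* Sketch only: ALLOW/INCLUDE and BLOCK/EXCLUDE create the new set entries,
     * while ALLOW/EXCLUDE and BLOCK/INCLUDE delete the entries common to the
     * current and new sets. */
    if ((allow && filter_mode == MCAST_INCLUDE) ||
        (!allow && filter_mode == MCAST_EXCLUDE))
            __eht_create_set_entries(pg, h_addr, srcs, nsrcs, addr_size,
                                     filter_mode);
    else
            __eht_del_common_set_entries(pg, h_addr, srcs, nsrcs, addr_size);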
-
Nikolay Aleksandrov authored
In the initial EHT versions there were common functions which handled allow/block messages for both INCLUDE and EXCLUDE modes, but later they were separated. It seems I've left some common code which cannot be reached because the filter mode is checked before calling the respective functions, i.e. the host filter is always in EXCLUDE mode when using __eht_allow_excl() and __eht_block_excl() thus we can drop the host_excl checks inside and simplify the code a bit. Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
DENG Qingfang authored
Support port MDB and bridge flag operations. As the hardware can manage multicast forwarding itself, offload_fwd_mark can be unconditionally set to true. Signed-off-by: DENG Qingfang <dqfext@gmail.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Álvaro Fernández Rojas says: ==================== net: mdio: Add BCM6368 MDIO mux bus controller This controller is present on BCM6318, BCM6328, BCM6362, BCM6368 and BCM63268 SoCs. v2: add changes suggested by Andrew Lunn and Jakub Kicinski. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Álvaro Fernández Rojas authored
This controller is present on BCM6318, BCM6328, BCM6362, BCM6368 and BCM63268 SoCs. Signed-off-by: Álvaro Fernández Rojas <noltari@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Álvaro Fernández Rojas authored
Add documentation for the bcm6368 MDIO mux driver. Signed-off-by: Álvaro Fernández Rojas <noltari@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Xie He authored
There are two "netif_running" checks in this driver. One is in "lapbeth_xmit" and the other is in "lapbeth_rcv". They serve to make sure that the LAPB APIs called in these functions are called before "lapb_unregister" is called by the "ndo_stop" function. However, these "netif_running" checks are unreliable, because it's possible that immediately after "netif_running" returns true, "ndo_stop" is called (which causes "lapb_unregister" to be called). This patch adds locking to make sure "lapbeth_xmit" and "lapbeth_rcv" can reliably check and ensure the netif is running while doing their work. Fixes: 1da177e4 ("Linux-2.6.12-rc2") Signed-off-by: Xie He <xie.he.0141@gmail.com> Acked-by: Martin Schiller <ms@dev.tdt.de> Signed-off-by: David S. Miller <davem@davemloft.net>
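A minimal sketch of the locking pattern, assuming an "up" flag and an "up_lock" spinlock in the device's private data (field names are assumptions, not necessarily those in the patch):

    static netdev_tx_t lapbeth_xmit(struct sk_buff *skb, struct net_device *dev)
    {
            struct lapbethdev *lapbeth = netdev_priv(dev);

            spin_lock_bh(&lapbeth->up_lock);
            if (!lapbeth->up) {              /* replaces the racy netif_running() check */
                    kfree_skb(skb);
                    goto out;
            }
            /* ... existing lapb_data_request() path runs here ... */
    out:
            spin_unlock_bh(&lapbeth->up_lock);
            return NETDEV_TX_OK;
    }

ndo_open sets "up" and ndo_stop clears it under the same lock before calling lapb_unregister(), so the check and the LAPB calls can no longer race with teardown.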
-
David S. Miller authored
Alex Elder says: ==================== net: ipa: QMI fixes Mani Sadhasivam discovered some errors in the definitions of some QMI messages used for IPA. This series addresses those errors, and extends the definition of one message type to include some newly-defined fields. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
The specified format of the INDICATION_REGISTER QMI request message has been extended to support two more optional fields: endpoint_desc_ind: sender wishes to receive endpoint descriptor information via an IPA ENDP_DESC indication QMI message bw_change_ind: sender wishes to receive bandwidth change information via an IPA BW_CHANGE indication QMI message Add definitions that permit these fields to be formatted and parsed by the QMI library code. Signed-off-by: Alex Elder <elder@linaro.org> Acked-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
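A sketch of how such an optional field is typically described to the QMI library: each optional TLV gets a QMI_OPT_FLAG entry for its *_valid member followed by an entry for the value itself. The tlv_type value and structure member names below are assumptions for illustration:

    static struct qmi_elem_info ipa_indication_register_req_ei_sketch[] = {
            {
                    .data_type  = QMI_OPT_FLAG,
                    .elem_len   = 1,
                    .elem_size  = sizeof(u8),
                    .array_type = NO_ARRAY,
                    .tlv_type   = 0x13,      /* assumed */
                    .offset     = offsetof(struct ipa_indication_register_req,
                                           endpoint_desc_ind_valid),
            },
            {
                    .data_type  = QMI_UNSIGNED_1_BYTE,
                    .elem_len   = 1,
                    .elem_size  = sizeof(u8),
                    .array_type = NO_ARRAY,
                    .tlv_type   = 0x13,      /* assumed; same TLV as its valid flag */
                    .offset     = offsetof(struct ipa_indication_register_req,
                                           endpoint_desc_ind),
            },
            /* bw_change_ind would follow the same flag/value pattern */
            {
                    .data_type  = QMI_EOTI,
            },
    };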
-
Alex Elder authored
The ipa_init_modem_driver_req_ei[] encoding array for the INIT_MODEM_DRIVER request message has some errors in it. First, the tlv_type associated with the hw_stats_quota_size field is wrong; it duplicates the value used for the hw_stats_quota_base_addr field (0x1f) and should use 0x20 instead. The tlv_type value for the hw_stats_drop_size field also uses the same duplicate value; it should use 0x22 instead. Second, there is no definition for the hw_stats_drop_base_addr field. It is an optional 32-bit enumerated type value. Finally, the hw_stats_quota_base_addr, hw_stats_quota_size, and hw_stats_drop_size fields are defined as enumerated types; they should be unsigned 4-byte values. Reported-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Alex Elder <elder@linaro.org> Acked-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
In the ipa_indication_register_req_ei[] encoding array, the tlv_type associated with the ipa_mhi_ready_ind field is wrong. It duplicates the value used for the data_usage_quota_reached field (0x11) and should use value 0x12 instead. Fix this bug. Reported-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Alex Elder <elder@linaro.org> Acked-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wei Yongjun authored
The return value 'rc' may be overwritten to 0 in the flow_action_for_each loop, so the error code from the "offload not supported" error handling never gets set. This commit fixes it to return -EOPNOTSUPP. Fixes: 6a56e199 ("flow_offload: reject configuration of packet-per-second policing in offload drivers") Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
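A self-contained illustration of the bug pattern (not the driver code): an error stored in rc inside the loop is lost when a later iteration overwrites it, so the unsupported case must return immediately. action_supported() and program_action() are hypothetical helpers:

    static int validate_actions(const struct action *acts, int n)
    {
            int rc = 0;
            int i;

            for (i = 0; i < n; i++) {
                    if (!action_supported(&acts[i]))
                            return -EOPNOTSUPP;  /* before: rc = -EOPNOTSUPP; and the loop continued */
                    rc = program_action(&acts[i]);
            }
            return rc;
    }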
-
Álvaro Fernández Rojas authored
Add missing of_match_table to allow device tree probing. Signed-off-by: Álvaro Fernández Rojas <noltari@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
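For reference, a minimal sketch of the missing piece in a platform driver; the compatible string and names here are placeholders, not the ones from this patch:

    static const struct of_device_id foo_of_match[] = {
            { .compatible = "vendor,foo-device" },
            { /* sentinel */ }
    };
    MODULE_DEVICE_TABLE(of, foo_of_match);

    static struct platform_driver foo_driver = {
            .probe  = foo_probe,
            .driver = {
                    .name           = "foo",
                    .of_match_table = foo_of_match,
            },
    };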
-
Calvin Johnson authored
Alphabetically sort header inclusions. Signed-off-by: Calvin Johnson <calvin.johnson@oss.nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Shannon Nelson says: ==================== ionic Tx updates Just as the Rx path recently got a face lift, it is time for the Tx path to get some attention. The original TSO-to-descriptor mapping was ugly and convoluted and needed some deep work. This series pulls the dma mapping out of the descriptor frag mapping loop and makes the dma mapping more generic for use in the non-TSO case. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
Gather the Tx packet and byte counts and call netdev_tx_completed_queue() only once per clean cycle. Signed-off-by: Shannon Nelson <snelson@pensando.io> Signed-off-by: David S. Miller <davem@davemloft.net>
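A sketch of the batched accounting (helper names are assumptions): counts are accumulated while descriptors are cleaned and reported to the BQL layer once at the end of the cycle:

    unsigned int pkts = 0, bytes = 0;

    while ((desc_info = tx_next_completed(q))) {     /* assumed helper */
            pkts++;
            bytes += desc_info->bytes;
            tx_desc_free(q, desc_info);              /* assumed helper */
    }
    netdev_tx_completed_queue(q_to_ndq(q), pkts, bytes);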
-
Shannon Nelson authored
The descriptor mappings are set up the same way whether or not it is a TSO, so we don't need separate logic for the two cases. Signed-off-by: Shannon Nelson <snelson@pensando.io> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
Make the new ionic_tx_map_tso() usable by the non-TSO paths, and pull the call up a level into ionic_tx() before calling the csum or no-csum routines. Signed-off-by: Shannon Nelson <snelson@pensando.io> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
One issue with the original TSO code was that it was working too hard to deal with skb layouts that were never going to show up, such as an skb->data that was longer than a single descriptor's length. The other issue was trying to arrange the fragment dma mapping at the same time as figuring out the descriptors needed. There was just too much going on at the same time. Now we do the dma mapping first, which sets up the buffers with skb->data in buf[0] and the remaining frags in buf[1..n-1]. Next we spread the bufs across the descriptors needed, where each descriptor gets up to mss number of bytes. Signed-off-by: Shannon Nelson <snelson@pensando.io> Signed-off-by: David S. Miller <davem@davemloft.net>
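A self-contained illustration of the second step, spreading pre-mapped buffers across descriptors of at most mss bytes each (all names are hypothetical, not the driver's):

    #include <stddef.h>

    struct buf { size_t len; };

    static size_t count_tso_descs(const struct buf *bufs, int nbufs, size_t mss)
    {
            size_t ndescs = 0, room = 0;
            int i;

            for (i = 0; i < nbufs; i++) {
                    size_t left = bufs[i].len;

                    while (left) {
                            size_t chunk;

                            if (!room) {             /* start a new descriptor */
                                    ndescs++;
                                    room = mss;
                            }
                            chunk = left < room ? left : room;
                            left -= chunk;
                            room -= chunk;
                    }
            }
            return ndescs;
    }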
-
David S. Miller authored
Alex Elder says: ==================== net: qualcomm: rmnet: stop using C bit-fields Version 6 is the same as version 5, but has been rebased on updated net-next/master. With any luck, the patches I'm sending out this time won't contain garbage. Version 5 of this series responds to a suggestion made by Alexander Duyck, to determine the offset to the checksummed range of a packet using skb_network_header_len() on patch 2. I have added his Reviewed-by tag to all (other) patches, and removed Bjorn's from patch 2. The change required some updates to the subsequent patches, and I reordered some assignments in a minor way in the last patch. I don't expect any more discussion on this series (but will respond if there is any). So at this point I would really appreciate it if KS and/or Sean would offer a review, or at least acknowledge it. I presume you two are able to independently test the code as well, so I request that, and hope you are willing to do so. Version 4 of this series is here: https://lore.kernel.org/netdev/20210315133455.1576188-1-elder@linaro.org ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
Replace the use of C bit-fields in the rmnet_map_ul_csum_header structure with a single two-byte (big endian) structure member, and use masks to encode or get values within it. The content of these fields can be accessed using simple bitwise AND and OR operations on the (host byte order) value of the new structure member. Previously rmnet_map_ipv4_ul_csum_header() would update C bit-field values in host byte order, then forcibly fix their byte order using a combination of byte swap operations and types. Instead, just compute the value that needs to go into the new structure member and save it with a simple byte-order conversion. Make similar simplifications in rmnet_map_ipv6_ul_csum_header(). Finally, in rmnet_map_checksum_uplink_packet() a set of assignments zeroes every field in the upload checksum header. Replace that with a single memset() operation. Signed-off-by: Alex Elder <elder@linaro.org> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Signed-off-by: David S. Miller <davem@davemloft.net>
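A self-contained sketch of the mask-based encoding with a single byte-order conversion at the end; the field layout and mask names are assumptions for illustration, not the driver's exact definitions:

    #include <stdint.h>
    #include <arpa/inet.h>

    #define UL_CSUM_INSERT_OFFSET   0x3fffu  /* low 14 bits: checksum insert offset */
    #define UL_CSUM_UDP_IND         0x4000u  /* UDP indication */
    #define UL_CSUM_ENABLED         0x8000u  /* checksum offload enabled */

    struct ul_csum_header {
            uint16_t csum_start_offset;      /* big endian on the wire */
            uint16_t csum_info;              /* offset and flags, big endian on the wire */
    };

    static void fill_ul_csum(struct ul_csum_header *hdr, uint16_t start,
                             uint16_t insert, int is_udp)
    {
            uint16_t info = (insert & UL_CSUM_INSERT_OFFSET) | UL_CSUM_ENABLED;

            if (is_udp)
                    info |= UL_CSUM_UDP_IND;
            hdr->csum_start_offset = htons(start);
            hdr->csum_info = htons(info);    /* one conversion, no bit-field fixups */
    }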
-
Alex Elder authored
Replace the use of C bit-fields in the rmnet_map_dl_csum_trailer structure with a single one-byte field, using constant field masks to encode or get at embedded values. Signed-off-by: Alex Elder <elder@linaro.org> Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
The actual layout of bits defined in C bit-fields (e.g. int foo : 3) is implementation-defined. Structures defined in <linux/if_rmnet.h> address this by specifying all bit-fields twice, to cover two possible layouts. I think this pattern is repetitive and noisy, and I find the whole notion of compiler "bitfield endianness" to be non-intuitive. Stop using C bit-fields for the command/data flag and the pad length fields in the rmnet_map structure, and define a single-byte flags field instead. Define a mask for the single-bit "command" flag, and another mask for the encoded pad length. The content of both fields can be accessed using a simple bitwise AND operation. Signed-off-by: Alex Elder <elder@linaro.org> Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Signed-off-by: David S. Miller <davem@davemloft.net>
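A self-contained sketch of the single-byte flags field and its mask accessors; the mask names mirror the commit description but are assumptions here:

    #include <stdint.h>

    #define MAP_CMD_FLAG            0x80u   /* command (1) vs. data (0) */
    #define MAP_PAD_LEN_MASK        0x3fu   /* encoded pad length */

    struct map_header {
            uint8_t  flags;                 /* replaces the old C bit-fields */
            uint8_t  mux_id;
            uint16_t pkt_len;               /* big endian on the wire */
    };

    static inline int map_is_command(const struct map_header *h)
    {
            return !!(h->flags & MAP_CMD_FLAG);
    }

    static inline unsigned int map_pad_len(const struct map_header *h)
    {
            return h->flags & MAP_PAD_LEN_MASK;
    }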
-
Alex Elder authored
The following macros, defined in "rmnet_map.h", assume a socket buffer is provided as an argument without any real indication this is the case. RMNET_MAP_GET_MUX_ID() RMNET_MAP_GET_CD_BIT() RMNET_MAP_GET_PAD() RMNET_MAP_GET_CMD_START() RMNET_MAP_GET_LENGTH() What they hide is pretty trivial accessing of fields in a structure, and it's much clearer to see this if we do these accesses directly. So rather than using these accessor macros, assign a local variable of the map header pointer type to the socket buffer data pointer, and dereference that pointer variable. In "rmnet_map_data.c", use sizeof(object) rather than sizeof(type) in one spot. Also, there's no need to byte swap 0; it's all zeros irrespective of endianness. Signed-off-by: Alex Elder <elder@linaro.org> Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Signed-off-by: David S. Miller <davem@davemloft.net>
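A sketch of the direct access the commit describes, given a pointer to the start of the packet data (skb->data in the driver) and reusing the illustrative struct and masks from the previous sketch (names remain assumptions):

    const struct map_header *map = (const struct map_header *)data;
    uint8_t mux_id = map->mux_id;
    int is_cmd = !!(map->flags & MAP_CMD_FLAG);
    unsigned int pad = map->flags & MAP_PAD_LEN_MASK;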
-
Alex Elder authored
In rmnet_map_ipv4_ul_csum_header() and rmnet_map_ipv6_ul_csum_header() the offset within a packet at which checksumming should commence is calculated. This calculation involves byte swapping and a forced type conversion that makes it hard to understand. Simplify this by computing the offset in host byte order, then converting the result when assigning it into the header field. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
The fields in the checksum trailer structure used for QMAP protocol RX packets are all big-endian format, so define them that way. It turns out these fields are never actually used by the RMNet code. The start offset is always assumed to be zero, and the length is taken from the other packet headers. So making these fields explicitly big endian has no effect on the behavior of the code. Signed-off-by: Alex Elder <elder@linaro.org> Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Chen Yu authored
Although there is a platform issue with runtime suspend support on CNP, it would be more flexible to let the user decide whether to disable runtime suspend or not, because: 1. This can be done in userspace via echo on > /sys/devices/pci0000\:00/0000\:00\:1f.d/power/control 2. More and more NICs support runtime suspend, and disabling runtime suspend on them by default would impact validation. Only disable runtime suspend on CNP in case of any user space regression. Signed-off-by: Chen Yu <yu.c.chen@intel.com> Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Chen Yu authored
The NIC is put into runtime suspend when there is no cable connected. As a result, it is safe to keep a non-wakeup NIC runtime suspended during s2ram, because the system relies on neither the NIC plug event nor WoL to wake up the system. Besides that, unlike s2idle, s2ram does not need to manipulate S0ix settings during suspend. This patch introduces .prepare() for e1000e so that, if the NIC is runtime suspended, the subsequent suspend/resume hooks are skipped so as to speed up s2ram. The PM core checks whether the NIC is a wakeup device, so there is no need to check it again in .prepare(). The DPM_FLAG_SMART_PREPARE flag should be set during probe to ask the PCI subsystem to honor the driver's prepare() result. Besides, the NIC remains runtime suspended after resume from s2ram, as there is no need to resume it. Tested on i7-2600K with an 82579V NIC. Before the patch: e1000e 0000:00:19.0: pci_pm_suspend+0x0/0x160 returned 0 after 225146 usecs e1000e 0000:00:19.0: pci_pm_resume+0x0/0x90 returned 0 after 140588 usecs After the patch (with echo disabled > /sys/devices/pci0000\:00/0000\:00\:19.0/power/wakeup): 0 usecs, because the hooks are skipped. Suggested-by: Kai-Heng Feng <kai.heng.feng@canonical.com> Signed-off-by: Chen Yu <yu.c.chen@intel.com> Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
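A minimal sketch of the idea (not necessarily the exact driver code): with DPM_FLAG_SMART_PREPARE set at probe time, a positive return from .prepare() tells the PM/PCI core that a runtime-suspended, non-wakeup device can be left suspended for firmware-assisted suspend (s2ram):

    static int e1000e_pm_prepare(struct device *dev)
    {
            return pm_runtime_suspended(dev) &&
                   pm_suspend_via_firmware();       /* skip only for s2ram, not s2idle */
    }

    /* during probe: */
    dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_SMART_PREPARE);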
-
- 15 Mar, 2021 2 commits
-
-
Alex Elder authored
In review, Alexander Duyck suggested that ipa_table_hash_support() was trivial enough that it could be implemented as a static inline function in the header file. But the patch had already been accepted. Implement his suggestion. Signed-off-by: Alex Elder <elder@linaro.org> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Signed-off-by: David S. Miller <davem@davemloft.net>
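A minimal sketch of the suggested static inline, assuming (as in the existing code) that hashed filter/route tables are unsupported only on IPA v4.2:

    static inline bool ipa_table_hash_support(struct ipa *ipa)
    {
            return ipa->version != IPA_VERSION_4_2;
    }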
-
Ivan Bornyakov authored
Add basic support for the Marvell 88X2222 multi-speed ethernet transceiver. This PHY provides data transmission over fiber-optic as well as Twinax copper links. The 88X2222 supports 2 ports of 10GBase-R and 1000Base-X on the line-side interface. The host-side interface supports 4 ports of 10GBase-R, RXAUI, 1000Base-X and 2 ports of XAUI. This driver, however, supports only XAUI on the host side and 1000Base-X/10GBase-R on the line side, for now. SGMII is also supported over 1000Base-X. Interrupts are not supported. Internal register access is compliant with the Clause 45 specification. Signed-off-by: Ivan Bornyakov <i.bornyakov@metrotek.ru> Signed-off-by: David S. Miller <davem@davemloft.net>
-