- 14 Apr, 2021 4 commits
-
-
David S. Miller authored
Ivan Bornyakov says: ==================== net: phy: marvell-88x2222: a couple of improvements First, there are some SFP modules that only use RX_LOS for link indication. Add a check that the link is operational before actually reading the line-side status. Second, it is invalid to set 10G speed without autonegotiation, according to phy_ethtool_ksettings_set(). Implement switching between 10GBase-R and 1000Base-X/SGMII if autonegotiation can't complete but there is a signal in the line. Changelog: v1 -> v2: * make the check that the link is operational friendlier for transceivers without SFP cages. * split swapping 1G/10G modes into non-functional and functional commits for the sake of easier review. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ivan Bornyakov authored
Setting 10G without autonegotiation is invalid according to phy_ethtool_ksettings_set(). Thus, we need to set it during autonegotiation. If 1G autonegotiation can't complete for quite a time, but there is a signal in the line, switch the line interface type to 10GBase-R, if supported, in the hope that a link will be established. And vice versa: if a 10GBase-R link can't be established for quite a time, autonegotiation is enabled, and there is a signal in the line, switch the line interface type to the appropriate 1G mode, i.e. 1000Base-X or SGMII, if supported. Signed-off-by: Ivan Bornyakov <i.bornyakov@metrotek.ru> Signed-off-by: David S. Miller <davem@davemloft.net>
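A minimal sketch of that swap, assuming a polling-style read_status path; struct my_priv, priv->line_interface, priv->no_link_polls and LINK_TIMEOUT_POLLS are illustrative names, not the driver's actual fields:

  /* Hypothetical helper: if autoneg is on, a line-side signal is present,
   * but no link has come up for several polls, try the other line mode.
   */
  static void swap_line_interface_if_stuck(struct phy_device *phydev,
                                           bool signal_detected)
  {
          struct my_priv *priv = phydev->priv;

          if (phydev->autoneg != AUTONEG_ENABLE || phydev->link ||
              !signal_detected || ++priv->no_link_polls <= LINK_TIMEOUT_POLLS)
                  return;

          priv->no_link_polls = 0;
          if (priv->line_interface == PHY_INTERFACE_MODE_10GBASER)
                  priv->line_interface = PHY_INTERFACE_MODE_1000BASEX; /* or SGMII */
          else
                  priv->line_interface = PHY_INTERFACE_MODE_10GBASER;
          /* reprogram the line-side registers for the new mode here */
  }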
-
Ivan Bornyakov authored
No functional changes, just move the link status read routines below the autonegotiation configuration to make future functional changes more distinct. Signed-off-by: Ivan Bornyakov <i.bornyakov@metrotek.ru> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ivan Bornyakov authored
Some SFP modules use RX_LOS for link indication. In such cases the link will always be up, even without a cable connected. RX_LOS changes will trigger link_up()/link_down() upstream operations. Thus, check that the SFP link is operational before actually reading the link status. If there is no SFP cage connected to the transceiver, check only the PMD Receive Signal Detect register. Signed-off-by: Ivan Bornyakov <i.bornyakov@metrotek.ru> Signed-off-by: David S. Miller <davem@davemloft.net>
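A rough sketch of that check order; struct my_priv, priv->sfp_attached, priv->sfp_link and the PMD_RX_SIGNAL_DETECT constant are assumed for illustration, the driver defines its own names:

  #define PMD_RX_SIGNAL_DETECT   0x000a   /* register 1.10, assumed name */

  static int read_link_sketch(struct phy_device *phydev)
  {
          struct my_priv *priv = phydev->priv;
          int ret;

          /* With an SFP cage attached, trust RX_LOS / the SFP link state
           * before touching the line-side status registers.
           */
          if (priv->sfp_attached && !priv->sfp_link) {
                  phydev->link = 0;
                  return 0;
          }

          /* No SFP cage: only the PMD Receive Signal Detect bit counts. */
          ret = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, PMD_RX_SIGNAL_DETECT);
          if (ret < 0)
                  return ret;

          phydev->link = !!(ret & BIT(0));
          return 0;
  }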
-
- 13 Apr, 2021 36 commits
-
-
Arnd Bergmann authored
The driver was removed last year, but the static initialization got left behind by accident. Fixes: a10079c6 ("staging: remove hp100 driver") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Ioana Ciornei says: ==================== dpaa2-switch: add tc hardware offload on ingress traffic This patch set adds tc hardware offload on ingress traffic in dpaa2-switch. The cls flower and matchall classifiers are supported using the same ACL infrastructure provided by the dpaa2-switch. The first patch creates a new structure to hold all the necessary information related to an ACL table. This structure is used in the next patches to create a link between each switch port and the table used. Multiple ports can share the same ACL table when they also share the ingress tc block. Also, a small change to the priority of the default STP trap is made in the second patch. The support for cls flower is added in the 3rd patch, while the 4th one builds on top of the infrastructure put in place and adds cls matchall support. The following flow keys are supported: - Ethernet: dst_mac/src_mac - IPv4: dst_ip/src_ip/ip_proto/tos - VLAN: vlan_id/vlan_prio/vlan_tpid/vlan_dei - L4: dst_port/src_port Each filter can support only one action from the following list: - drop - mirred egress redirect - trap With the last patch, we reuse the dpaa2_switch_acl_entry_add() function added previously instead of open-coding the install of a new ACL entry into the table. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ioana Ciornei authored
Since we added the dpaa2_switch_acl_entry_add() function in the previous patches to hide all the details of actually adding the ACL entry by issuing a firmware command, let's use it also for adding a CPU trap for the STP frames. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ioana Ciornei authored
Add support for TC_SETUP_CLSMATCHALL by using the same ACL table entries framework as for tc flower. Adding a matchall rule is done by installing an entry which has a mask of all zeroes, thus matching any packet. This can be used as a catch-all type of rule if used correctly, i.e. the priority of the matchall filter should be kept as the lowest one in the entire filter block. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ioana Ciornei authored
This patch adds support for tc flower hardware offload on the ingress path. Shared filter blocks are supported by sharing a single ACL table between multiple ports. The following flow keys are supported: - Ethernet: dst_mac/src_mac - IPv4: dst_ip/src_ip/ip_proto/tos - VLAN: vlan_id/vlan_prio/vlan_tpid/vlan_dei - L4: dst_port/src_port The following flow actions are supported: - drop - mirred egress redirect - trap Each ACL entry (filter) can be set up with only one of the listed actions. A sorted singly linked list is used to keep the ACL entries in order of priority. When adding a new filter, this enables us to quickly ascertain whether the new entry has the highest priority of the entire block or whether we should make some space in the ACL table by increasing the priority of the filters already in the table. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
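For illustration, a generic sketch of keeping such a list sorted by tc priority; the acl_entry structure and helper are hypothetical, not the dpaa2-switch types:

  #include <linux/list.h>
  #include <linux/types.h>

  struct acl_entry {
          struct list_head list;
          u32 prio;               /* lower value = higher precedence */
  };

  /* Insert so the list stays ordered from highest to lowest precedence. */
  static void acl_entry_add_ordered(struct list_head *entries,
                                    struct acl_entry *new)
  {
          struct acl_entry *tmp;

          list_for_each_entry(tmp, entries, list) {
                  if (new->prio < tmp->prio) {
                          /* list_add_tail() on a member inserts before it */
                          list_add_tail(&new->list, &tmp->list);
                          return;
                  }
          }
          list_add_tail(&new->list, entries);
  }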
-
Ioana Ciornei authored
Change the default ACL trap rule for STP frames to have the highest priority. The same ACL table will hold both the default rules added by the driver for its internal use and the rules added with tc flower. In this case, default rules such as the STP one that we already have should have the highest priority. Also, remove the check for a full ACL table since we already know that it's sized so that we don't hit this case. The last change is that default trap filters will no longer be counted in the acl_tbl's num_rules variable since their number doesn't change. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ioana Ciornei authored
Introduce a new structure - dpaa2_switch_acl_tbl - to hold all data related to an ACL table: number of rules added, ACL table id, etc. This will be used more in the next patches when adding support for sharing an ACL table between ports. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Dan Carpenter authored
The copy_to_user() function returns the number of bytes that it wasn't able to copy. We want to return -EFAULT to the user. Fixes: fee6efce ("ionic: add hw timestamp support files") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Shannon Nelson <snelson@pensando.io> Signed-off-by: David S. Miller <davem@davemloft.net>
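For reference, a hedged sketch of the convention the fix relies on (a generic pattern, not the ionic-specific hunk; names are placeholders):

  /* copy_to_user() returns the number of bytes it could NOT copy,
   * so any non-zero result is reported to userspace as -EFAULT.
   */
  if (copy_to_user(user_buf, &cfg, sizeof(cfg)))
          return -EFAULT;
  return 0;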
-
David S. Miller authored
Ong Boon Leong says: ==================== stmmac: add XDP ZC support This is the v2 patch series to add XDP ZC support to the stmmac driver. Summary of v2 patch change: 6/7: fix: synchronize_rcu() is called in stmmac_disable_all_queues(), which is used by ndo_setup_tc(). ######################################################################## Continuous burst traffic is generated by a pktgen script and, in the midst of each packet processing operation by xdpsock, the following tc-loop.sh script is looped continuously:

  #!/bin/bash
  tc qdisc del dev eth0 parent root
  tc qdisc add dev eth0 ingress
  tc qdisc add dev eth0 root mqprio num_tc 4 map 0 1 2 3 0 0 0 0 0 0 0 0 0 0 0 0 queues 1@0 1@1 1@2 1@3 hw 0
  tc filter add dev eth0 parent ffff: protocol 802.1Q flower vlan_prio 0 hw_tc 0
  tc filter add dev eth0 parent ffff: protocol 802.1Q flower vlan_prio 1 hw_tc 1
  tc filter add dev eth0 parent ffff: protocol 802.1Q flower vlan_prio 2 hw_tc 2
  tc filter add dev eth0 parent ffff: protocol 802.1Q flower vlan_prio 3 hw_tc 3
  tc qdisc list dev eth0
  tc filter show dev eth0 ingress

On a different ssh terminal:

  $ while true; do ./tc-loop.sh; sleep 1; done

The v2 patch series has been tested using the xdpsock app:

  $ ./xdpsock -i eth0 -l -z

From the xdpsock poller pps report and dmesg, we don't find any warning related to rcu, and the only difference when the script is executed is that the pps rate drops momentarily.

  sock0@eth0:0 l2fwd xdp-drv
                 pps        pkts       1.00
  rx             436347     191361334
  tx             436411     191361334

  sock0@eth0:0 l2fwd xdp-drv
                 pps        pkts       1.00
  rx             254117     191615476
  tx             254053     191615412

  sock0@eth0:0 l2fwd xdp-drv
                 pps        pkts       1.00
  rx             466395     192081924
  tx             466395     192081860

  sock0@eth0:0 l2fwd xdp-drv
                 pps        pkts       1.00
  rx             287410     192369365
  tx             287474     192369365

  sock0@eth0:0 l2fwd xdp-drv
                 pps        pkts       1.00
  rx             395853     192765329
  tx             395789     192765265

  sock0@eth0:0 l2fwd xdp-drv
                 pps        pkts       1.00
  rx             466132     193231514
  tx             466132     193231450

######################################################################## Based on the above result, the fix looks promising. I would appreciate it if the community can help to review the patch series and provide me with feedback for improvement. ====================
-
Ong Boon Leong authored
We add support for XDP ZC TX submission and cleaning into stmmac_tx_clean(). The function is made to clean as many TX complete frames as possible, i.e. limited by priv->dma_tx_size instead of the NAPI budget. For a TX ring that is associated with an XSK pool, the function stmmac_xdp_xmit_zc() is introduced to transmit frame buffers from the XSK pool by using xsk_tx_peek_desc(). To make stmmac_tx_clean() support the cleaning of XSK TX frames, the STMMAC_TXBUF_T_XSK_TX TX buffer type is introduced. As stmmac_tx_clean() uses its return value to cue whether the NAPI function should continue to poll, we augment the callers of stmmac_tx_clean() to pass the NAPI budget instead of priv->dma_tx_size through the 'budget' input and make stmmac_tx_clean() always clean up to the TX ring size instead. This allows us to use the boolean return status of stmmac_xdp_xmit_zc() to decide whether the XSK TX work is done or not: if true, set 'xmits' to 'budget - 1' so that the NAPI poll may exit; else, set 'xmits' to 'budget' to make the NAPI poll continue, since the XSK TX work is not done. Finally, at the end of stmmac_tx_clean(), the function now takes the maximum value of 'count' and 'xmits' so that the status from both TX cleaning and XSK TX (only for XDP ZC) is considered. This patch also adds a new NAPI poll called stmmac_napi_poll_rxtx() that is meant to be enabled/disabled for RX and TX rings that are bound to an XSK pool. This NAPI poll function starts by cleaning the TX ring, then submits XSK TX frames to the TX ring before proceeding to perform RX operations, i.e. receiving RX frames and replenishing the RX ring with RX free buffers obtained from the XSK pool. Therefore, during XSK RX and TX setup, the driver enables stmmac_napi_poll_rxtx() for RX and TX operations, then during XSK RX and TX pool tear-down, the driver re-enables the existing independent NAPI poll functions accordingly: stmmac_napi_poll_rx() and stmmac_napi_poll_tx(). Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
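A small sketch of that return-value bookkeeping, using the commit's variable names; the helper itself is hypothetical:

  /* Combine TX-clean progress ('count') with the XSK TX submission
   * status: budget - 1 lets NAPI complete, budget forces another poll.
   */
  static int tx_clean_ret(int count, bool xsk_tx_done, int budget)
  {
          int xmits = xsk_tx_done ? budget - 1 : budget;

          return max(count, xmits);
  }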
-
Ong Boon Leong authored
This patch adds support for receiving packets via the AF_XDP zero-copy mechanism. XDP ZC uses a 1:1 mapping of XDP buffer to received packet, therefore split header support is not used currently. The 'xdp_buff' is declared as a union together with a struct that contains 'page', 'addr' and 'page_offset' associated with the primary buffer. RX buffers are now allocated either via page_pool or the xsk pool. RX buffers from the xsk_pool are allocated and deallocated using the functions below:
 * stmmac_alloc_rx_buffers_zc(struct stmmac_priv *priv, u32 queue)
 * dma_free_rx_xskbufs(struct stmmac_priv *priv, u32 queue)
With the above functions now available, we then extend the following driver functions to support XDP ZC:
 * stmmac_reinit_rx_buffers()
 * __init_dma_rx_desc_rings()
 * init_dma_rx_desc_rings()
 * __free_dma_rx_desc_resources()
Note: stmmac_alloc_rx_buffers_zc() may return -ENOMEM because the RX XDP buffer pool is not allocated (e.g. samples/bpf/xdpsock TX-only). But it is still OK to let TX XDP ZC continue, therefore the -ENOMEM is silently ignored to let the driver successfully transition to XDP ZC mode for the said RX and TX queue. As the XDP ZC buffer size is different, the DMA buffer size has to be reprogrammed accordingly for the RX DMA/Queue that is populated with XDP buffers from the XSK pool. Next, to add or remove a per-queue XSK pool, stmmac_xdp_setup_pool() will call stmmac_xdp_enable_pool() or stmmac_xdp_disable_pool(), which in turn coordinate the tearing down and setting up of the RX ring via RX buffer and descriptor removal and reallocation through stmmac_disable_rx_queue() and stmmac_enable_rx_queue(). In addition, stmmac_xsk_wakeup() is added to initiate XDP RX buffer replenishing by signalling the user application to add available XDP frames back to the FILL queue. For RX processing using an XDP zero-copy buffer, stmmac_rx_zc() is introduced, which is implemented with the assumption that RX split header is disabled. If the XDP verdict is XDP_PASS, the XDP buffer is copied into an sk_buff allocated through stmmac_construct_skb_zc() and sent to the Linux network GRO inside stmmac_dispatch_skb_zc(). Free RX buffers are then replenished using stmmac_rx_refill_zc(). v2: introduce __stmmac_disable_all_queues() to contain the original code that does napi_disable() and then make stmmac_setup_tc_block_cb() use it. Move synchronize_rcu() into stmmac_disable_all_queues(), which eventually calls __stmmac_disable_all_queues(). Then, make both stmmac_release() and stmmac_suspend() use stmmac_disable_all_queues(). Thanks to David Miller for spotting the synchronize_rcu() issue in the v1 patch. Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
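As a hedged sketch, the per-entry buffer bookkeeping described above can be pictured like this (field names follow the commit text, not the exact driver struct):

  struct rx_buffer {
          union {
                  struct {                        /* page_pool-backed buffer */
                          struct page *page;
                          dma_addr_t addr;
                          __u32 page_offset;
                  };
                  struct xdp_buff *xdp;           /* XSK-pool-backed buffer */
          };
  };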
-
Ong Boon Leong authored
Prepare stmmac_xdp_run_prog() for AF_XDP zero-copy support, which will be added by upcoming patches, by splitting out the XDP verdict processing into __stmmac_xdp_run_prog() and making it callable from the XDP ZC path, which does not need to verify that bpf_prog is not NULL. The stmmac_xdp_run_prog() is used for the regular XDP RX path, which requires bpf_prog to be verified. Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
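The shape of such a split, as a sketch rather than the actual stmmac code (struct my_priv and the verdict handling are simplified placeholders):

  /* Inner helper: assumes a valid prog (XDP ZC path). */
  static u32 __xdp_run_prog(struct bpf_prog *prog, struct xdp_buff *xdp)
  {
          u32 act = bpf_prog_run_xdp(prog, xdp);

          /* map act (XDP_PASS/XDP_TX/XDP_REDIRECT/...) to a driver verdict */
          return act;
  }

  /* Outer wrapper: keeps the NULL check for the regular XDP RX path. */
  static u32 xdp_run_prog(struct my_priv *priv, struct xdp_buff *xdp)
  {
          struct bpf_prog *prog = READ_ONCE(priv->xdp_prog);

          if (!prog)
                  return XDP_PASS;

          return __xdp_run_prog(prog, xdp);
  }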
-
Ong Boon Leong authored
The functions below are made per-queue in preparation for XDP ZC:
 __init_dma_rx_desc_rings(struct stmmac_priv *priv, u32 queue, gfp_t flags)
 __init_dma_tx_desc_rings(struct stmmac_priv *priv, u32 queue)
The original functions below remain for all-queue usage:
 init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags)
 init_dma_tx_desc_rings(struct net_device *dev)
Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ong Boon Leong authored
The per-queue RX buffer allocation in stmmac_reinit_rx_buffers() can be made to use stmmac_alloc_rx_buffers() by merging the page_pool alloc checks for "buf->page" and "buf->sec_page" in stmmac_init_rx_buffers(). This is in preparation for XSK pool allocation later. Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ong Boon Leong authored
Rearrange the RX buffer page_pool recycling logic into dma_recycle_rx_skbufs(), so that we prepare stmmac_reinit_rx_buffers() for XSK pool expansion. Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ong Boon Leong authored
This patch restructures the per RX queue buffer allocation from page_pool to stmmac_alloc_rx_buffers(). We also rearrange dma_free_rx_skbufs() so that it can be used in init_dma_rx_desc_rings() during the freeing of RX buffers in the event of a page_pool allocation failure, replacing the more efficient method used earlier. The replacement is needed to make the RX buffer alloc and free methods scalable to XDP ZC xsk_pool alloc and free later. Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Alex Elder says: ==================== net: ipa: add support for the SM8350 SoC This small series adds IPA driver support for the Qualcomm SM8350 SoC, which implements IPA v4.9. The first patch updates the DT binding, and depends on a previous patch that has already been accepted into net-next. The second just defines the IPA v4.9 configuration data file. (Device Tree files to support this SoC will be sent separately and will go through the Qualcomm tree.) ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
Add support for the SM8350 SoC, which includes IPA version 4.9. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
Add support for "qcom,sm8350-ipa", which uses IPA v4.9. Use "enum" rather than "oneOf/const ..." to specify compatible strings, as suggested by Rob Herring. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
All the uses of HWTSTAMP_FILTER_* values need to be bit shifters, not straight values. v2: fixed subject and added Cc Dan and SoB Allen Fixes: f8ba81da ("ionic: add ethtool support for PTP") Cc: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Shannon Nelson <snelson@pensando.io> Signed-off-by: Allen Hubbe <allenbh@pensando.io> Signed-off-by: David S. Miller <davem@davemloft.net>
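A generic sketch of what that means in practice (the variable name is illustrative, not ionic's):

  u64 supported_filters = 0;

  /* Each HWTSTAMP_FILTER_* value is a bit position, so it must be
   * shifted into the mask, not OR'ed in as a raw value.
   */
  supported_filters |= BIT_ULL(HWTSTAMP_FILTER_PTP_V2_L4_EVENT);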
-
Lijun Pan authored
The reset process for ibmvnic commonly takes multiple seconds, clearly making it inappropriate for schedule_work/system_wq. The reason to make this change is that ibmvnic's use of the default system-wide workqueue for a relatively long-running work item can negatively affect other workqueue users. So, queue the relatively slow reset job to the system_long_wq. Suggested-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Lijun Pan <lijunp213@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
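In pattern form, the change boils down to the sketch below; 'reset_work' is an assumed field name for illustration:

  /* Queue the multi-second reset work on the long-running system
   * workqueue instead of the default one used by schedule_work().
   */
  queue_work(system_long_wq, &adapter->reset_work);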
-
David S. Miller authored
Merge tag 'linux-can-next-for-5.13-20210413' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next Marc Kleine-Budde says: ==================== pull-request: can-next 2021-04-13 this is a pull request of 14 patches for net-next/master. The first patch is by Yoshihiro Shimoda and updates the DT bindings for the rcar_can driver. Vincent Mailhol contributes 3 patches that add support for several ETAS USB CAN adapters. The final 10 patches are by me and clean up the peak_usb CAN driver. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yang Li authored
Fix the following versioncheck warning: ./drivers/net/wireless/rsi/rsi_91x_ps.c: 19 linux/version.h not needed. Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Yang Li <yang.lee@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
wengjianfeng authored
In the st_nci_spi_write() function, a value is first assigned to a variable and then we goto the exit label. The return statement directly follows the label and the exit label is used only once, so we should return directly and remove the exit label. Signed-off-by: wengjianfeng <wengjianfeng@yulong.com> Signed-off-by: David S. Miller <davem@davemloft.net>
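Generically, the cleanup has this shape (a sketch of the pattern with a placeholder call, not the exact st_nci hunk):

  /* before */
  ret = spi_write_something(dev, buf, len);
  goto exit;
  exit:
          return ret;

  /* after */
  return spi_write_something(dev, buf, len);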
-
Lijun Pan authored
The current implementation relies on an H_IOCTL call to issue an H_SESSION_ERR_DETECTED command to let the hypervisor send a failover signal. However, it may not work if there is no backup device or if the vnic is already in an error state, e.g., "ibmvnic 30000003 env3: rx buffer returned with rc 6". Add a last resort, that is, schedule a failover reset via a CRQ command. Signed-off-by: Lijun Pan <lijunp213@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andreas Roeseler authored
The current icmp_rcv function drops all unknown ICMP types, including ICMP_EXT_ECHOREPLY (type 43). In order to parse Extended Echo Reply messages, we have to pass these packets to the ping_rcv function, which does not do any other filtering and passes the packet to the designated socket. Pass incoming RFC 8335 ICMP Extended Echo Reply packets to the ping_rcv handler instead of discarding the packet. Signed-off-by: Andreas Roeseler <andreas.a.roeseler@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
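Conceptually the dispatch change amounts to the sketch below (the real icmp_rcv() hunk keeps its existing success/drop bookkeeping, which is simplified here):

  if (icmph->type == ICMP_ECHOREPLY || icmph->type == ICMP_EXT_ECHOREPLY) {
          ping_rcv(skb);          /* demux to the matching ping socket */
          return 0;
  }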
-
David S. Miller authored
Michael Walle says: ==================== of: net: support non-platform devices in of_get_mac_address() of_get_mac_address() is commonly used to fetch the MAC address from the device tree. It also supports reading it from a NVMEM provider. But the latter is only possible for platform devices, because only platform devices are searched for a matching device node. Add a second method to fetch the NVMEM cell by a device tree node instead of a "struct device". Moreover, the NVMEM subsystem will return dynamically allocated data which has to be freed after use. Currently, this is handled by allocating a device resource managed buffer to store the MAC address. of_get_mac_address() then returns a pointer to this buffer. Without a device, this trick is not possible anymore. Thus, change the of_get_mac_address() API to have the caller supply a buffer. It was considered to use the network device to attach the buffer to, but then the order matters and netdev_register() has to be called before of_get_mac_address(). No driver does it this way. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Walle authored
of_get_mac_address() already supports fetching the MAC address via an nvmem provider. But until now, it was only working for platform devices. In particular, it was not working for DSA ports and PCI devices. It is getting more common for PCI devices to have a device tree binding, since SoCs contain integrated root complexes. Use the nvmem of_* binding to fetch the nvmem cells by a struct device_node. We still have to try to read the cell by device first because there might be an nvmem_cell_lookup associated with that device. Signed-off-by: Michael Walle <michael@walle.cc> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
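A condensed sketch of that lookup order (error paths trimmed and the surrounding helper simplified; 'addr' is the caller-supplied destination buffer):

  struct nvmem_cell *cell;
  void *mac;
  size_t len;

  /* Try the device-based lookup first so nvmem_cell_lookup entries still
   * apply, then fall back to the pure device-tree lookup that also works
   * for DSA ports and PCI devices.
   */
  cell = dev ? nvmem_cell_get(dev, "mac-address")
             : of_nvmem_cell_get(np, "mac-address");
  if (IS_ERR(cell))
          return PTR_ERR(cell);

  mac = nvmem_cell_read(cell, &len);
  nvmem_cell_put(cell);
  if (IS_ERR(mac))
          return PTR_ERR(mac);

  if (len == ETH_ALEN && is_valid_ether_addr(mac))
          ether_addr_copy(addr, mac);
  kfree(mac);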
-
Michael Walle authored
of_get_mac_address() returns a "const void*" pointer to a MAC address. Lately, support to fetch the MAC address via an NVMEM provider was added. But this will only work with platform devices. It will not work with PCI devices (e.g. of an integrated root complex) and especially not with DSA ports. There is an of_* variant of the nvmem binding which works without devices. The returned data of a nvmem_cell_read() has to be freed after use. On the other hand, the return value of of_get_mac_address() points to some static data without a lifetime. The trick for now was to allocate a device resource managed buffer which is then returned. This will only work if we have an actual device. Change it, so that the caller of of_get_mac_address() has to supply a buffer where the MAC address is written to. Unfortunately, this will touch all drivers which use of_get_mac_address(). Usually the code looks like:

  const char *addr;
  addr = of_get_mac_address(np);
  if (!IS_ERR(addr))
          ether_addr_copy(ndev->dev_addr, addr);

This can then be simply rewritten as:

  of_get_mac_address(np, ndev->dev_addr);

Sometimes is_valid_ether_addr() is used to test the MAC address. of_get_mac_address() already makes sure it only returns a valid MAC address, thus we can just test its return code. But we have to be careful if there are still other sources for the MAC address before the of_get_mac_address() call. In this case we have to keep the is_valid_ether_addr() call. The following coccinelle patch was used to convert common cases to the new style. Afterwards, I've manually gone over the drivers and fixed the return code variable: either used a new one or, if one was already available, used that. Mansour Moufid, thanks for that coccinelle patch!

  <spml>
  @a@
  identifier x;
  expression y, z;
  @@
  -       x = of_get_mac_address(y);
  +       x = of_get_mac_address(y, z);
          <...
  -       ether_addr_copy(z, x);
          ...>

  @@
  identifier a.x;
  @@
  -       if (<+... x ...+>) {}

  @@
  identifier a.x;
  @@
          if (<+... x ...+>) {
          ...
          }
  -       else {}

  @@
  identifier a.x;
  expression e;
  @@
  -       if (<+... x ...+>@e)
  -       {}
  -       else
  +       if (!(e))
          {...}

  @@
  expression x, y, z;
  @@
  -       x = of_get_mac_address(y, z);
  +       of_get_mac_address(y, z);
          ... when != x
  </spml>

All drivers, except drivers/net/ethernet/aeroflex/greth.c, were compile-time tested. Suggested-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Michael Walle <michael@walle.cc> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
René van Dorst authored
This patch adds EEE support. Signed-off-by: René van Dorst <opensource@vdorst.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Merge tag 'wireless-drivers-next-2021-04-13' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next Kalle Valo says: ==================== wireless-drivers-next patches for v5.13 First set of patches for v5.13. I have been offline for a while, so I have a smaller pull request this time. The next one will be bigger. Nothing really special standing out.

ath11k
 * add initial support for QCN9074, but not enabled yet due to firmware problems
 * enable radar detection for 160MHz secondary segment
 * handle beacon misses in station mode

rtw88
 * 8822c: support firmware crash dump

mt7601u
 * enable TDLS support
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Marc Kleine-Budde authored
This patch replaces the open coded endianness conversion of unaligned data by the appropriate get/put_unaligned_leXX() variants. Link: https://lore.kernel.org/r/20210406111622.1874957-11-mkl@pengutronix.de Acked-by: Stephane Grosjean <s.grosjean@peak-system.com> Tested-by: Stephane Grosjean <s.grosjean@peak-system.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
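As a generic illustration of the pattern (not the specific peak_usb hunks; the buffer layout is assumed):

  #include <asm/unaligned.h>

  /* Read/write a little-endian 32-bit field at an unaligned offset,
   * instead of memcpy() into a temporary followed by le32_to_cpu().
   */
  u32 val = get_unaligned_le32(buf + 2);
  put_unaligned_le32(val, buf + 2);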
-
Marc Kleine-Budde authored
The function serial_number is only called from one location with a valid serial_number pointer. Remove the unneeded NULL pointer check. Link: https://lore.kernel.org/r/20210406111622.1874957-10-mkl@pengutronix.de Acked-by: Stephane Grosjean <s.grosjean@peak-system.com> Tested-by: Stephane Grosjean <s.grosjean@peak-system.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
This patch replaces the memcpy() + le32_to_cpu() by le32_to_cpup(). Link: https://lore.kernel.org/r/20210406111622.1874957-9-mkl@pengutronix.de Acked-by: Stephane Grosjean <s.grosjean@peak-system.com> Tested-by: Stephane Grosjean <s.grosjean@peak-system.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
The caller of pcan_usb_get_serial() already prints an error message, so remove this one and return immediately. Link: https://lore.kernel.org/r/20210406111622.1874957-8-mkl@pengutronix.de Acked-by: Stephane Grosjean <s.grosjean@peak-system.com> Tested-by: Stephane Grosjean <s.grosjean@peak-system.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
The callback struct peak_usb_adapter::dev_get_device_id, which is implemented by the functions pcan_usb_{,pro}_get_device_id(), is only ever called with a valid device_id pointer. This patch removes the unneeded check if the device_id pointer is valid. Link: https://lore.kernel.org/r/20210406111622.1874957-7-mkl@pengutronix.de Acked-by: Stephane Grosjean <s.grosjean@peak-system.com> Tested-by: Stephane Grosjean <s.grosjean@peak-system.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-