- 21 Jul, 2020 40 commits
-
Russell King authored
Comparing the ethtool output from phylink and non-phylink fixed-link setups shows that we have some differences: - The "auto-negotiation" fields are different; phylink reports these as "No", non-phylink reports these as "Yes" for the supported and advertising masks. - The link partner advertisement is set to the link speed with non-phylink, but phylink leaves this unset, causing all link partner fields to be omitted. The phylink ethtool output also disagrees with the software emulated PHY dump via the MII registers. Update the phylink fixed-link parsing code so that we better reflect the behaviour of the non-phylink code that this facility replaces, and bring the ethtool interface more into line with the report via the MII interface. Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Claudiu Manoil says: ==================== enetc: Add adaptive interrupt coalescing Apart from some related cleanup patches, this set introduces in a straightforward way the support needed to enable and configure interrupt coalescing for ENETC. Patch 5 introduces the support needed for configuring the interrupt coalescing parameters and for switching between moderated (int. coalescing) and per-packet interrupt modes. When interrupt coalescing is enabled the Rx/Tx time thresholds are configurable, packet thresholds are fixed. To make this work reliably, patch 5 uses the traffic pause procedure introduced in patch 2. Patch 6 adds DIM (Dynamic Interrupt Moderation) to implement adaptive coalescing based on time thresholds, for the Rx 'channel'. On the Tx side a default optimal value is used instead, optimized for TCP traffic over 1G and 2.5G links. This default 'optimal' value can be overridden anytime via 'ethtool -C tx-usecs'. netperf -t TCP_MAERTS measurements show a significant CPU load reduction correlated w/ reduced interrupt rates. For the measurement results refer to the comments in patch 6. v2: Replaced Tx DIM with predefined optimal value, giving better results. This was also suggested by Jakub (cc). Switched order of patches 4 and 5, for better grouping. v3: minor cleanup/improvements ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Claudiu Manoil authored
Use the generic dynamic interrupt moderation (dim) framework to implement adaptive interrupt coalescing on Rx. With the per-packet interrupt scheme, a high interrupt rate has been noted for moderate traffic flows, leading to high CPU utilization. The 'dim' scheme implemented by the current patch addresses this issue, improving CPU utilization while using minimal coalescing time thresholds in order to preserve a good latency. On the Tx side, an optimal time threshold value is used by default. This value has been optimized for Tx TCP streams at a rate of around 85kpps on a 1G link, at which rate half of the Tx ring size (128) gets filled in 1500 usecs. Scaling this down to 2.5G links yields the current value of 600 usecs, which is conservative and gives good enough results for 1G links too (see below). Below are some measurement results before and after this patch (and related dependencies), on a system with 2 ARM Cortex-A72 CPUs @ 1.3 GHz (32 KB L1 data cache), using 60-second netperf TCP stream tests @ 1 Gbit link (maximum throughput): 1) 1 Rx TCP flow, both Rx and Tx processed by the same NAPI thread on the same CPU: Before: 50%-60% CPU utilization (consistently over 50%), ~92k ints/sec. After: 13%-22% CPU utilization, 3.5k-12k ints/sec. Comment: Major CPU utilization improvement for a single Rx TCP flow (i.e. netperf -t TCP_MAERTS) on a single CPU. Usually settles under 16% for longer tests. 2) 4 Rx TCP flows + 4 Tx TCP flows (+ pings to check the latency): Before: ~80% total CPU utilization (spikes to 90%), ~100k total ints/sec. After: 60% total CPU utilization (more steady), ~4k total ints/sec. Comment: Important improvement for this load test, while the ping test outcome does not show any notable difference compared to before. Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
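For reference, a minimal sketch of how the generic dim framework is usually wired into a driver's Rx path; the example_* names and ring counters below are illustrative assumptions, not the actual enetc symbols:

#include <linux/dim.h>
#include <linux/workqueue.h>

/* Worker scheduled by dim whenever it picks a new moderation profile. */
static void example_rx_dim_work(struct work_struct *w)
{
	struct dim *dim = container_of(w, struct dim, work);
	struct dim_cq_moder moder =
		net_dim_get_rx_moderation(dim->mode, dim->profile_ix);

	/* program moder.usec / moder.pkts into the Rx coalescing registers */
	dim->state = DIM_START_MEASURE;
}

/* Called once per Rx ring, e.g. when the interface is opened. */
static void example_rx_dim_init(struct dim *dim)
{
	INIT_WORK(&dim->work, example_rx_dim_work);
	dim->mode = DIM_CQ_PERIOD_MODE_START_FROM_EQE;
}

/* Called at the end of every NAPI poll with the ring's running counters. */
static void example_rx_dim_sample(struct dim *dim, u16 irq_count,
				  u64 packets, u64 bytes)
{
	struct dim_sample sample;

	dim_update_sample(irq_count, packets, bytes, &sample);
	net_dim(dim, sample);
}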
-
Claudiu Manoil authored
Enable programming of the interrupt coalescing registers and allow manual configuration of the coalescing time thresholds via ethtool. Packet thresholds have been fixed to predetermined values as there's no point in making them run-time configurable, also anticipating the dynamic interrupt moderation (DIM) algorithm which uses fixed packet thresholds as well. If the interface is up when the operation mode of traffic interrupt events is changed by the user (i.e. switching from default per-packet interrupts to coalesced interrupts), the traffic needs to be paused in the process. This patch also prepares the ground for introducing DIM on Rx. Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
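For context, a hedged sketch of what ethtool coalescing callbacks looked like around this kernel generation (pre-5.15 signatures); the example_* names and the priv layout are assumptions, not the enetc ones:

#include <linux/ethtool.h>
#include <linux/netdevice.h>

struct example_priv {
	u32 rx_ictt;		/* Rx coalescing time threshold (us) */
	u32 tx_ictt;		/* Tx coalescing time threshold (us) */
	bool adaptive_rx;
};

static int example_get_coalesce(struct net_device *ndev,
				struct ethtool_coalesce *ec)
{
	struct example_priv *priv = netdev_priv(ndev);

	ec->rx_coalesce_usecs = priv->rx_ictt;
	ec->tx_coalesce_usecs = priv->tx_ictt;
	ec->use_adaptive_rx_coalesce = priv->adaptive_rx;
	return 0;
}

static int example_set_coalesce(struct net_device *ndev,
				struct ethtool_coalesce *ec)
{
	struct example_priv *priv = netdev_priv(ndev);

	priv->rx_ictt = ec->rx_coalesce_usecs;
	priv->tx_ictt = ec->tx_coalesce_usecs;
	priv->adaptive_rx = ec->use_adaptive_rx_coalesce;
	/* if the interface is up, pause traffic and reprogram the registers */
	return 0;
}

static const struct ethtool_ops example_ethtool_ops = {
	.supported_coalesce_params = ETHTOOL_COALESCE_USECS |
				     ETHTOOL_COALESCE_USE_ADAPTIVE_RX,
	.get_coalesce = example_get_coalesce,
	.set_coalesce = example_set_coalesce,
};

With callbacks of this shape, 'ethtool -C <iface> rx-usecs N tx-usecs M adaptive-rx on|off' is the user-facing knob.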
-
Claudiu Manoil authored
'struct enetc_bdr' is already '____cacheline_aligned_in_smp'. Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Claudiu Manoil authored
The interrupt coalescing registers are named ICR in the current revision of the Reference Manual (RM); this deprecates the ICIR name used in earlier (draft) versions of the RM. Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Claudiu Manoil authored
A reliable traffic pause (and reconfiguration) procedure is needed to be able to safely make h/w configuration changes at run-time, like changing the mode in which the interrupts are operating (i.e. with or without coalescing), as opposed to making on-the-fly register updates that may be subject to h/w or s/w concurrency issues. To this end, the code responsible for the run-time device configuration, which basically starts and stops the traffic flow through the device, has been extracted from the enetc_open/_close procedures into separate standalone enetc_start/_stop procedures. The traffic stop should be as graceful as possible; it lets the executing napi threads finish while the interrupts stay disabled. But since the napi thread will try to re-enable interrupts by clearing the device's unmask register, the enable_irq()/disable_irq() API has been used to avoid this potential concurrency issue and make the traffic pause procedure more reliable. Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
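As an illustration of the pause pattern described above, a rough sketch with made-up names (not the actual enetc_start()/enetc_stop() bodies):

#include <linux/interrupt.h>
#include <linux/netdevice.h>

static void example_pause_traffic(struct net_device *ndev, int irq,
				  struct napi_struct *napi)
{
	netif_tx_stop_all_queues(ndev);
	/* Keep the line masked at the irqchip level so a completing napi
	 * poll cannot effectively re-enable device interrupts underneath us.
	 */
	disable_irq(irq);	/* also waits for a running handler */
	napi_disable(napi);	/* waits for the current poll to finish */
}

static void example_resume_traffic(struct net_device *ndev, int irq,
				   struct napi_struct *napi)
{
	napi_enable(napi);
	enable_irq(irq);
	netif_tx_start_all_queues(ndev);
}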
-
Claudiu Manoil authored
It's time to differentiate between Rx and Tx ring sizes. Not only are Tx rings processed differently from Rx rings, but their default number also differs - i.e. up to 8 Tx rings per device (8 traffic classes) vs. 2 Rx rings (one per CPU). So let's set the Tx ring size to half the Rx ring size for now, to be conservative. The default ring sizes were decreased as well (to the next lower power of 2), to reduce the memory footprint, buffering etc., since the measurements I've made so far show that the rings are very unlikely to get full. This change also anticipates the introduction of the dynamic interrupt moderation (dim) algorithm, which operates on maximum packet thresholds of 256 packets for Rx and 128 packets for Tx. Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jisheng Zhang authored
Use devm_gpiod_get_array() to simplify the error handling and exit code path. Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com> Signed-off-by: David S. Miller <davem@davemloft.net>
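For illustration, the typical shape of such a conversion (the "reset" con_id and the example_* name are assumptions, not necessarily what this driver uses):

#include <linux/err.h>
#include <linux/gpio/consumer.h>

static int example_get_gpios(struct device *dev)
{
	struct gpio_descs *descs;

	/* One devm-managed call replaces per-descriptor gpiod_get() calls
	 * and the matching error-unwinding / release code.
	 */
	descs = devm_gpiod_get_array(dev, "reset", GPIOD_OUT_LOW);
	if (IS_ERR(descs))
		return PTR_ERR(descs);

	/* descs->ndescs descriptors, released automatically on driver detach */
	return 0;
}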
-
Vladimir Oltean authored
Now that DSA supports MTU configuration, undo the effects of commit 8b1efc0f ("net: remove MTU limits on a few ether_setup callers") and let DSA interfaces use the default min_mtu and max_mtu specified by ether_setup(). This is more important for min_mtu: since DSA is Ethernet, the minimum MTU is the same as of any other Ethernet interface, and definitely not zero. For the max_mtu, we have a callback through which drivers can override that, if they want to. Signed-off-by: Vladimir Oltean <olteanv@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
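For context, a minimal sketch of what relying on ether_setup() means here (ether_setup() already leaves min_mtu = ETH_MIN_MTU and max_mtu = ETH_DATA_LEN; the function name is illustrative):

#include <linux/etherdevice.h>

static void example_slave_setup(struct net_device *dev)
{
	ether_setup(dev);
	/* no longer overridden: dev->min_mtu = 0; dev->max_mtu = ETH_MAX_MTU; */
}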
-
Jonathan McDowell authored
This switch has a single max frame size configuration register, so we track the requested MTU for each port and apply the largest. v2: - Address review feedback from Vladimir Oltean Signed-off-by: Jonathan McDowell <noodles@earth.li> Acked-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
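A generic sketch of that approach under assumed names (the port count, priv layout and register write are illustrative, not this switch's actual ones):

#include <linux/kernel.h>
#include <net/dsa.h>

#define EXAMPLE_NUM_PORTS	8	/* illustrative */

struct example_sw_priv {
	int port_mtu[EXAMPLE_NUM_PORTS];
};

static int example_port_change_mtu(struct dsa_switch *ds, int port,
				   int new_mtu)
{
	struct example_sw_priv *priv = ds->priv;
	int i, mtu = 0;

	priv->port_mtu[port] = new_mtu;

	/* One max-frame-size register for the whole switch: program the
	 * largest MTU requested on any port.
	 */
	for (i = 0; i < EXAMPLE_NUM_PORTS; i++)
		mtu = max(mtu, priv->port_mtu[i]);

	/* write (mtu + Ethernet header and FCS overhead) to the chip's
	 * single MAX_FRAME_SIZE register here
	 */
	return 0;
}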
-
Wang Hai authored
Because kfree_skb() already checks for a NULL skb parameter, the additional checks are unnecessary; just remove them. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Wang Hai <wanghai38@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Christophe JAILLET authored
The wrappers in include/linux/pci-dma-compat.h should go away. The patch has been generated with the coccinelle script below and has been hand modified to replace GFP_ with a correct flag. It has been compile tested. When memory is allocated, GFP_KERNEL can be used because it is called from the probe function (i.e. 'fealnx_init_one()') and no lock is taken. @@ @@ - PCI_DMA_BIDIRECTIONAL + DMA_BIDIRECTIONAL @@ @@ - PCI_DMA_TODEVICE + DMA_TO_DEVICE @@ @@ - PCI_DMA_FROMDEVICE + DMA_FROM_DEVICE @@ @@ - PCI_DMA_NONE + DMA_NONE @@ expression e1, e2, e3; @@ - pci_alloc_consistent(e1, e2, e3) + dma_alloc_coherent(&e1->dev, e2, e3, GFP_) @@ expression e1, e2, e3; @@ - pci_zalloc_consistent(e1, e2, e3) + dma_alloc_coherent(&e1->dev, e2, e3, GFP_) @@ expression e1, e2, e3, e4; @@ - pci_free_consistent(e1, e2, e3, e4) + dma_free_coherent(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_map_single(e1, e2, e3, e4) + dma_map_single(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_unmap_single(e1, e2, e3, e4) + dma_unmap_single(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4, e5; @@ - pci_map_page(e1, e2, e3, e4, e5) + dma_map_page(&e1->dev, e2, e3, e4, e5) @@ expression e1, e2, e3, e4; @@ - pci_unmap_page(e1, e2, e3, e4) + dma_unmap_page(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_map_sg(e1, e2, e3, e4) + dma_map_sg(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_unmap_sg(e1, e2, e3, e4) + dma_unmap_sg(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_single_for_cpu(e1, e2, e3, e4) + dma_sync_single_for_cpu(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_single_for_device(e1, e2, e3, e4) + dma_sync_single_for_device(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_sg_for_cpu(e1, e2, e3, e4) + dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_sg_for_device(e1, e2, e3, e4) + dma_sync_sg_for_device(&e1->dev, e2, e3, e4) @@ expression e1, e2; @@ - pci_dma_mapping_error(e1, e2) + dma_mapping_error(&e1->dev, e2) @@ expression e1, e2; @@ - pci_set_dma_mask(e1, e2) + dma_set_mask(&e1->dev, e2) @@ expression e1, e2; @@ - pci_set_consistent_dma_mask(e1, e2) + dma_set_coherent_mask(&e1->dev, e2) Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
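For illustration, one such conversion in driver code typically ends up looking like this (buffer and function names are hypothetical; the actual fealnx changes follow the script above):

#include <linux/dma-mapping.h>
#include <linux/pci.h>

/* before: ring = pci_alloc_consistent(pdev, size, &ring_dma); */
static void *example_alloc_ring(struct pci_dev *pdev, size_t size,
				dma_addr_t *ring_dma)
{
	/* GFP_KERNEL is safe here when called from probe with no lock held */
	return dma_alloc_coherent(&pdev->dev, size, ring_dma, GFP_KERNEL);
}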
-
Christophe JAILLET authored
The wrappers in include/linux/pci-dma-compat.h should go away. The patch has been generated with the coccinelle script below and has been hand modified to replace GFP_ with a correct flag. It has been compile tested. When memory is allocated in 'setup_hw()' (hfcpci.c) GFP_KERNEL can be used because it is called from the probe function and no lock is taken. The call chain is: hfc_probe() --> setup_card() --> setup_hw() When memory is allocated in 'inittiger()' (netjet.c) GFP_ATOMIC must be used because a spin_lock is taken by the caller (i.e. 'nj_init_card()') This is also consistent with the other allocations done in the function. @@ @@ - PCI_DMA_BIDIRECTIONAL + DMA_BIDIRECTIONAL @@ @@ - PCI_DMA_TODEVICE + DMA_TO_DEVICE @@ @@ - PCI_DMA_FROMDEVICE + DMA_FROM_DEVICE @@ @@ - PCI_DMA_NONE + DMA_NONE @@ expression e1, e2, e3; @@ - pci_alloc_consistent(e1, e2, e3) + dma_alloc_coherent(&e1->dev, e2, e3, GFP_) @@ expression e1, e2, e3; @@ - pci_zalloc_consistent(e1, e2, e3) + dma_alloc_coherent(&e1->dev, e2, e3, GFP_) @@ expression e1, e2, e3, e4; @@ - pci_free_consistent(e1, e2, e3, e4) + dma_free_coherent(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_map_single(e1, e2, e3, e4) + dma_map_single(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_unmap_single(e1, e2, e3, e4) + dma_unmap_single(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4, e5; @@ - pci_map_page(e1, e2, e3, e4, e5) + dma_map_page(&e1->dev, e2, e3, e4, e5) @@ expression e1, e2, e3, e4; @@ - pci_unmap_page(e1, e2, e3, e4) + dma_unmap_page(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_map_sg(e1, e2, e3, e4) + dma_map_sg(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_unmap_sg(e1, e2, e3, e4) + dma_unmap_sg(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_single_for_cpu(e1, e2, e3, e4) + dma_sync_single_for_cpu(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_single_for_device(e1, e2, e3, e4) + dma_sync_single_for_device(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_sg_for_cpu(e1, e2, e3, e4) + dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_sg_for_device(e1, e2, e3, e4) + dma_sync_sg_for_device(&e1->dev, e2, e3, e4) @@ expression e1, e2; @@ - pci_dma_mapping_error(e1, e2) + dma_mapping_error(&e1->dev, e2) @@ expression e1, e2; @@ - pci_set_dma_mask(e1, e2) + dma_set_mask(&e1->dev, e2) @@ expression e1, e2; @@ - pci_set_consistent_dma_mask(e1, e2) + dma_set_coherent_mask(&e1->dev, e2) Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Briana Oursler authored
Add tdc to existing kselftest infrastructure so that it can be run with existing kselftests. TDC now generates objects in objdir/kselftest without cluttering the main objdir, leaves the source directory clean, and installs correctly in kselftest_install, properly adding itself to the run_kselftest.sh script. Add tc-testing as a target of selftests/Makefile. Create tdc.sh to run tdc.py targets with correct arguments. To support a single target from selftest/Makefile, combine tc-testing/bpf/Makefile and tc-testing/Makefile. Move action.c up a directory to tc-testing/. Tested with:
make O=/tmp/{objdir} TARGETS="tc-testing" kselftest
cd /tmp/{objdir}
cd kselftest
cd tc-testing
./tdc.sh
make -C tools/testing/selftests/ TARGETS=tc-testing run_tests
make TARGETS="tc-testing" kselftest
cd tools/testing/selftests
./kselftest_install.sh /tmp/exampledir
My VM doesn't run all the kselftests, so I commented out all except my target and net/pmtu.sh, then:
cd /tmp/exampledir && ./run_kselftest.sh
Co-developed-by: Davide Caratti <dcaratti@redhat.com> Signed-off-by: Davide Caratti <dcaratti@redhat.com> Signed-off-by: Briana Oursler <briana.oursler@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vinay Kumar Yadav authored
Enable tcp window scaling option in hw based on sysctl settings and option in connection request. v1->v2: - Set window scale option based on option in connection request. Signed-off-by: Vinay Kumar Yadav <vinay.yadav@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Mark Starovoytov says: ==================== net: atlantic: various features This patchset adds more features for Atlantic NICs: * media detect; * additional per-queue stats; * PTP stats; * ipv6 support for TCP LSO and UDP GSO; * 64-bit operations; * A0 ntuple filters; * MAC temperature (hwmon). This work is a joint effort of Marvell developers. v3: * reworked patches related to stats: . fixed u64_stats_update_* usage; . use simple assignment in _get_stats / _fill_stats_data; . made _get_sw_stats / _fill_stats_data return count as return value; . split rx and tx per-queue stats; v2: https://patchwork.ozlabs.org/cover/1329652/ * removed media detect feature (will be reworked and submitted later); * removed irq counter from stats; * use u64_stats_update_* to protect 64-bit stats; * use io-64-nonatomic-lo-hi.h for readq/writeq fallbacks; v1: https://patchwork.ozlabs.org/cover/1327894/ ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mark Starovoytov authored
This patch adds the possibility to obtain MAC temperature via hwmon. On A1 there are two separate temperature sensors. On A2 there's only one temperature sensor, which is used for reporting both MAC and PHY temperature. Signed-off-by: Mark Starovoytov <mstarovoitov@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
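For context, a bare-bones sketch of registering a single temperature sensor with hwmon; this is hedged: the names and the hard-coded reading are placeholders, and the real driver obtains the sensor value through its firmware interface instead:

#include <linux/device.h>
#include <linux/errno.h>
#include <linux/hwmon.h>

static int example_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
			      u32 attr, int channel, long *value)
{
	if (type != hwmon_temp || attr != hwmon_temp_input)
		return -EOPNOTSUPP;

	*value = 42000;		/* placeholder, in millidegrees Celsius */
	return 0;
}

static umode_t example_hwmon_is_visible(const void *data,
					enum hwmon_sensor_types type,
					u32 attr, int channel)
{
	return (type == hwmon_temp && attr == hwmon_temp_input) ? 0444 : 0;
}

static const struct hwmon_ops example_hwmon_ops = {
	.is_visible = example_hwmon_is_visible,
	.read = example_hwmon_read,
};

static const struct hwmon_channel_info *example_hwmon_info[] = {
	HWMON_CHANNEL_INFO(temp, HWMON_T_INPUT),
	NULL,
};

static const struct hwmon_chip_info example_hwmon_chip_info = {
	.ops = &example_hwmon_ops,
	.info = example_hwmon_info,
};

/* at probe time:
 * devm_hwmon_device_register_with_info(dev, "example_mac", priv,
 *				        &example_hwmon_chip_info, NULL);
 */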
-
Dmitry Bogdanov authored
This patch adds support for ntuple filters on A0. Signed-off-by: Dmitry Bogdanov <dbogdanov@marvell.com> Signed-off-by: Mark Starovoytov <mstarovoitov@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nikita Danilov authored
This patch syncs up hw_atl_a0.c with an out-of-tree driver, where an intermediate variable was introduced in a couple of functions to improve the code readability a bit. Signed-off-by: Nikita Danilov <ndanilov@marvell.com> Signed-off-by: Mark Starovoytov <mstarovoitov@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mark Starovoytov authored
This patch replaces magic constant ~0U usage with U32_MAX in aq_hw_utils.c Signed-off-by: Mark Starovoytov <mstarovoitov@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pavel Belous authored
This patch adds support for 64-bit reads/writes where applicable, e.g. A2 supports them. Signed-off-by: Pavel Belous <pbelous@marvell.com> Signed-off-by: Mark Starovoytov <mstarovoitov@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
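For reference, a small sketch of the fallback mechanism involved (the function names are illustrative):

#include <linux/io-64-nonatomic-lo-hi.h>

/* On platforms without native 64-bit MMIO, this header implements
 * readq()/writeq() as two 32-bit accesses, low half first.
 */
static u64 example_read64(void __iomem *base, unsigned int reg)
{
	return readq(base + reg);
}

static void example_write64(void __iomem *base, unsigned int reg, u64 val)
{
	writeq(val, base + reg);
}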
-
Igor Russkikh authored
This patch enables ipv6 support for TCP LSO and UDP GSO. The code itself (aq_nic_map_skb) was already prepared for this by the UDP GSO work, but the corresponding NETIF_F_TSO6 flag wasn't enabled. Both TCP and UDP v6 GSO have now been tested, so they can be enabled safely. Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: Mark Starovoytov <mstarovoitov@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pavel Belous authored
This patch adds PTP rings statistics. Before this, they were missing from the overall stats, which made debugging and analysis harder. Signed-off-by: Pavel Belous <pbelous@marvell.com> Signed-off-by: Mark Starovoytov <mstarovoitov@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Dmitry Bogdanov authored
This patch adds additional per-queue stats, these could be useful for debugging and diagnostics. Signed-off-by: Dmitry Bogdanov <dbogdanov@marvell.com> Signed-off-by: Mark Starovoytov <mstarovoitov@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mark Starovoytov authored
This patch adds u64_stats_update_* usage to protect access to 64-bit stats, where necessary. This is necessary for per-ring stats, because they are updated by the driver directly, so there is a possibility for a partial read. Other stats require no additional protection, e.g.: * all MACSec stats are fetched directly from HW (under semaphore); * nic/ndev stats (aq_stats_s) are fetched directly from FW (under mutex). Signed-off-by: Mark Starovoytov <mstarovoitov@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
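The per-ring protection boils down to the usual u64_stats_sync pattern; a minimal sketch with made-up structure names:

#include <linux/u64_stats_sync.h>

struct example_ring_stats {
	struct u64_stats_sync syncp;	/* initialise with u64_stats_init() */
	u64 packets;
	u64 bytes;
};

/* Writer side: the ring's own processing path (single writer per ring). */
static void example_stats_add(struct example_ring_stats *s, u64 pkts, u64 bytes)
{
	u64_stats_update_begin(&s->syncp);
	s->packets += pkts;
	s->bytes += bytes;
	u64_stats_update_end(&s->syncp);
}

/* Reader side: e.g. ethtool stats; retry if a 64-bit update raced us. */
static void example_stats_read(struct example_ring_stats *s, u64 *pkts, u64 *bytes)
{
	unsigned int start;

	do {
		start = u64_stats_fetch_begin(&s->syncp);
		*pkts = s->packets;
		*bytes = s->bytes;
	} while (u64_stats_fetch_retry(&s->syncp, start));
}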
-
Mark Starovoytov authored
This patch splits rx and tx per-queue stats. This change simplifies the follow-up introduction of PTP stats and u64_stats_update_* usage. Signed-off-by: Mark Starovoytov <mstarovoitov@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mark Starovoytov authored
This patch changes aq_vec_get_sw_stats() to return count as a return value (which was unused) instead of an out parameter. Signed-off-by: Mark Starovoytov <mstarovoitov@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mark Starovoytov authored
This patch replaces addition assignment operator with a simple assignment in aq_vec_get_stats() and aq_vec_get_sw_stats(), because it is sufficient in both cases and this change simplifies the introduction of u64_stats_update_* in these functions. Signed-off-by: Mark Starovoytov <mstarovoitov@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mark Starovoytov authored
This patch moves FRAC_PER_NS to aq_hw.h so that it can be used in both hw_atl (A1) and hw_atl2 (A2) in the future. Signed-off-by: Mark Starovoytov <mstarovoitov@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Vladimir Oltean says: ==================== Extend testptp with PTP perout waveform Demonstrate the usage of the newly introduced flags in the PTP_PEROUT_REQUEST2 ioctl: https://www.spinics.net/lists/netdev/msg669346.html ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
Extend the example program for PTP ancillary functionality with the ability to configure not only the periodic output's period (frequency), but also the phase and duty cycle (pulse width) which were newly introduced. The ioctl level also needs to be updated to the new PTP_PEROUT_REQUEST2, since the original PTP_PEROUT_REQUEST doesn't support this functionality. For an in-tree testing program, not having explicit backwards compatibility is fine, as it should always be tested with the current kernel headers and sources. Tested with an oscilloscope on the felix switch PHC:
echo '2 0' > /sys/class/ptp/ptp1/pins/switch_1588_dat0
./testptp -d /dev/ptp1 -p 1000000000 -w 100000000 -H 1000 -i 0
Signed-off-by: Vladimir Oltean <olteanv@gmail.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
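At the ioctl level, the kind of request testptp ends up building looks roughly like the following userspace sketch (the values and the function name are illustrative):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/ptp_clock.h>

static int example_perout(int ptp_fd)
{
	struct ptp_perout_request req;

	memset(&req, 0, sizeof(req));
	req.index = 0;			/* periodic output channel */
	req.period.sec = 1;		/* 1 s period */
	req.period.nsec = 0;
	req.flags = PTP_PEROUT_DUTY_CYCLE | PTP_PEROUT_PHASE;
	req.on.sec = 0;			/* pulse width (duty cycle) */
	req.on.nsec = 100000000;	/* 100 ms high time */
	req.phase.sec = 0;		/* phase instead of absolute start */
	req.phase.nsec = 1000;

	/* the original PTP_PEROUT_REQUEST does not support these flags */
	return ioctl(ptp_fd, PTP_PEROUT_REQUEST2, &req);
}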
-
Vladimir Oltean authored
Since 'perout' holds the nanosecond value of the signal's period, it should be a 64-bit value. Current assumption is that it cannot be larger than 1 second. Signed-off-by: Vladimir Oltean <olteanv@gmail.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vaibhav Gupta authored
Drivers using legacy PM have to manage PCI states and the device's PM states themselves. They also need to take care of the configuration registers. With the improved and more powerful generic PM support, the PCI core takes care of these device-independent jobs. This driver makes use of PCI helper functions like pci_save/restore_state(), pci_enable/disable_device(), pci_set_power_state() and pci_set_master() to do the required operations. In generic mode, they are no longer needed. Change the function parameter in both .suspend() and .resume() to "struct device *". Use to_pci_dev() and dev_get_drvdata() to get the "struct pci_dev *" variable and the driver data. Compile-tested only. Signed-off-by: Vaibhav Gupta <vaibhavgupta40@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
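A hedged sketch of the converted shape (example_* names, driver-specific bodies omitted; the real callbacks do more):

#include <linux/netdevice.h>
#include <linux/pci.h>
#include <linux/pm.h>

static int __maybe_unused example_suspend(struct device *dev)
{
	struct net_device *ndev = dev_get_drvdata(dev);

	netif_device_detach(ndev);
	/* driver-specific quiesce; no pci_save_state()/pci_set_power_state(),
	 * the PCI core handles those around this callback
	 */
	return 0;
}

static int __maybe_unused example_resume(struct device *dev)
{
	struct net_device *ndev = dev_get_drvdata(dev);

	/* driver-specific re-initialisation */
	netif_device_attach(ndev);
	return 0;
}

static SIMPLE_DEV_PM_OPS(example_pm_ops, example_suspend, example_resume);

/* in the struct pci_driver declaration, .suspend/.resume are replaced by:
 *	.driver.pm = &example_pm_ops,
 */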
-
David S. Miller authored
Alexander Lobakin says: ==================== qed, qede: add support for new operating modes This series covers the support for the following: - new port modes; - loopback modes, previously missing; - new speed/link modes; - several FEC modes; - multi-rate transceivers; and also cleans up and optimizes several related parts of code. v3 (from [2]): - dropped custom link mode declaration; qed, qede and qedf switched to Ethtool link modes and definitions (#0001, #0002, per Andrew Lunn's suggestion); - exchange more .text size to .initconst and .ro_after_init in qede (#0003). v2 (from [1]): - added a patch (#0010) that drops discussed dead struct member; - addressed checkpatch complaints on #0014 (former #0013); - rebased on top of latest net-next; - no other changes. [1] https://lore.kernel.org/netdev/20200716115446.994-1-alobakin@marvell.com/ [2] https://lore.kernel.org/netdev/20200719201453.3648-1-alobakin@marvell.com/ ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Lobakin authored
Add all the necessary code (NVM parsing, MFW and Ethtool reports etc.) to support extended speed and FEC modes. These new modes are supported by new board revisions and newer MFW versions. Misc: correct the port type for MEDIA_KR. Signed-off-by: Alexander Lobakin <alobakin@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Lobakin authored
Simplify and lighten qed_set_link() by declaring static link mode maps and populating them on module init. This way we save plenty of text size at the low expense of __ro_after_init and __initconst data (the latter is purged after module init is done). Misc: sanitize the exit callback. Signed-off-by: Alexander Lobakin <alobakin@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Lobakin authored
These modes are relevant only for several boards, but may be reported by MFW as well as the others. Signed-off-by: Alexander Lobakin <alobakin@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Lobakin authored
These ports ship on new board revisions and are supported by newer firmware versions. Signed-off-by: Alexander Lobakin <alobakin@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Lobakin authored
Struct field qed_hw_info::port_mode isn't used anywhere in the code, so can be safely removed to prevent possible dead code addition. Also remove the enumeration QED_PORT_MODE orphaned after this deletion. Signed-off-by: Alexander Lobakin <alobakin@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-